Test Report: Docker_Linux 16143

a0e5cd1e5267706772177418d12de4a287eda5c3:2023-03-23:28462

Test failures: 1 of 313

Order | Failed test | Duration (s)
260 | TestPause/serial/SecondStartNoReconfiguration | 75.12
TestPause/serial/SecondStartNoReconfiguration (75.12s)
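
The check that failed (pause_test.go:100) expects the second "minikube start" run against the existing profile to report that the running cluster needed no reconfiguration. As a reading aid only, here is a minimal, hypothetical Go sketch of that kind of assertion; the function name, package, and structure are assumptions rather than the actual pause_test.go code, though the binary path, flags, and expected message are taken from the log below.

package integration

import (
	"os/exec"
	"strings"
	"testing"
)

// verifySecondStartNoReconfiguration re-runs `minikube start` against an
// existing profile and requires the "no reconfiguration" message in the
// combined output. Hypothetical sketch; not the real integration test.
func verifySecondStartNoReconfiguration(t *testing.T, profile string) {
	t.Helper()
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=docker")
	out, err := cmd.CombinedOutput() // stdout and stderr combined, as dumped below
	if err != nil {
		t.Fatalf("second start failed: %v\n%s", err, out)
	}
	const want = "The running cluster does not require reconfiguration"
	if !strings.Contains(string(out), want) {
		t.Errorf("expected the second start log output to include %q but got:\n%s", want, out)
	}
}

In this run the second start completed in about 1m10s, but the expected message never appeared in the combined output, which is exactly what the failure below reports.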

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-574316 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-574316 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m9.81177555s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-574316] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-574316 in cluster pause-574316
	* Pulling base image ...
	* Updating the running docker "pause-574316" container ...
	* Preparing Kubernetes v1.26.3 on Docker 23.0.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-574316" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0323 23:25:38.794014  401618 out.go:296] Setting OutFile to fd 1 ...
	I0323 23:25:38.794225  401618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:25:38.794240  401618 out.go:309] Setting ErrFile to fd 2...
	I0323 23:25:38.794262  401618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:25:38.794456  401618 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
	I0323 23:25:38.795236  401618 out.go:303] Setting JSON to false
	I0323 23:25:38.797716  401618 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7685,"bootTime":1679606254,"procs":1031,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0323 23:25:38.797813  401618 start.go:135] virtualization: kvm guest
	I0323 23:25:38.801127  401618 out.go:177] * [pause-574316] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0323 23:25:38.803311  401618 out.go:177]   - MINIKUBE_LOCATION=16143
	I0323 23:25:38.803314  401618 notify.go:220] Checking for updates...
	I0323 23:25:38.805210  401618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0323 23:25:38.807168  401618 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	I0323 23:25:38.809028  401618 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	I0323 23:25:38.810723  401618 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0323 23:25:38.812214  401618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0323 23:25:38.814210  401618 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:25:38.814624  401618 driver.go:365] Setting default libvirt URI to qemu:///system
	I0323 23:25:38.898698  401618 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0323 23:25:38.898809  401618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0323 23:25:39.028887  401618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:47 SystemTime:2023-03-23 23:25:39.019585714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0323 23:25:39.029046  401618 docker.go:294] overlay module found
	I0323 23:25:39.031181  401618 out.go:177] * Using the docker driver based on existing profile
	I0323 23:25:39.032527  401618 start.go:295] selected driver: docker
	I0323 23:25:39.032544  401618 start.go:856] validating driver "docker" against &{Name:pause-574316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:pause-574316 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-
provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0323 23:25:39.032682  401618 start.go:867] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0323 23:25:39.032779  401618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0323 23:25:39.165143  401618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2023-03-23 23:25:39.15591748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0323 23:25:39.165990  401618 cni.go:84] Creating CNI manager for ""
	I0323 23:25:39.166023  401618 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0323 23:25:39.166041  401618 start_flags.go:319] config:
	{Name:pause-574316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:pause-574316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] Custo
mAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0323 23:25:39.169386  401618 out.go:177] * Starting control plane node pause-574316 in cluster pause-574316
	I0323 23:25:39.171375  401618 cache.go:120] Beginning downloading kic base image for docker with docker
	I0323 23:25:39.173869  401618 out.go:177] * Pulling base image ...
	I0323 23:25:39.175359  401618 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0323 23:25:39.175382  401618 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0323 23:25:39.175404  401618 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
	I0323 23:25:39.175418  401618 cache.go:57] Caching tarball of preloaded images
	I0323 23:25:39.175518  401618 preload.go:174] Found /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0323 23:25:39.175533  401618 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.3 on docker
	I0323 23:25:39.175661  401618 profile.go:148] Saving config to /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/config.json ...
	I0323 23:25:39.257225  401618 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0323 23:25:39.257257  401618 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0323 23:25:39.257281  401618 cache.go:193] Successfully downloaded all kic artifacts
	I0323 23:25:39.257319  401618 start.go:364] acquiring machines lock for pause-574316: {Name:mk398c58b4397d996ea922b4a13a9404b26b4f2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0323 23:25:39.257463  401618 start.go:368] acquired machines lock for "pause-574316" in 91.492µs
	I0323 23:25:39.257489  401618 start.go:96] Skipping create...Using existing machine configuration
	I0323 23:25:39.257500  401618 fix.go:55] fixHost starting: 
	I0323 23:25:39.257789  401618 cli_runner.go:164] Run: docker container inspect pause-574316 --format={{.State.Status}}
	I0323 23:25:39.351866  401618 fix.go:103] recreateIfNeeded on pause-574316: state=Running err=<nil>
	W0323 23:25:39.351895  401618 fix.go:129] unexpected machine state, will restart: <nil>
	I0323 23:25:39.354221  401618 out.go:177] * Updating the running docker "pause-574316" container ...
	I0323 23:25:39.355859  401618 machine.go:88] provisioning docker machine ...
	I0323 23:25:39.355899  401618 ubuntu.go:169] provisioning hostname "pause-574316"
	I0323 23:25:39.355948  401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
	I0323 23:25:39.433008  401618 main.go:141] libmachine: Using SSH client type: native
	I0323 23:25:39.433738  401618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 32989 <nil> <nil>}
	I0323 23:25:39.433769  401618 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-574316 && echo "pause-574316" | sudo tee /etc/hostname
	I0323 23:25:39.583955  401618 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-574316
	
	I0323 23:25:39.584040  401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
	I0323 23:25:39.667029  401618 main.go:141] libmachine: Using SSH client type: native
	I0323 23:25:39.667707  401618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 32989 <nil> <nil>}
	I0323 23:25:39.667745  401618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-574316' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-574316/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-574316' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0323 23:25:39.809717  401618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0323 23:25:39.809746  401618 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16143-62012/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-62012/.minikube}
	I0323 23:25:39.809766  401618 ubuntu.go:177] setting up certificates
	I0323 23:25:39.809775  401618 provision.go:83] configureAuth start
	I0323 23:25:39.809825  401618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-574316
	I0323 23:25:39.913040  401618 provision.go:138] copyHostCerts
	I0323 23:25:39.913126  401618 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem, removing ...
	I0323 23:25:39.913138  401618 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem
	I0323 23:25:39.913218  401618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem (1123 bytes)
	I0323 23:25:39.913364  401618 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem, removing ...
	I0323 23:25:39.913373  401618 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem
	I0323 23:25:39.913465  401618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem (1675 bytes)
	I0323 23:25:39.913573  401618 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem, removing ...
	I0323 23:25:39.913594  401618 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem
	I0323 23:25:39.913636  401618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem (1078 bytes)
	I0323 23:25:39.913752  401618 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem org=jenkins.pause-574316 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-574316]
	I0323 23:25:39.987714  401618 provision.go:172] copyRemoteCerts
	I0323 23:25:39.987781  401618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0323 23:25:39.987815  401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
	I0323 23:25:40.075925  401618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/pause-574316/id_rsa Username:docker}
	I0323 23:25:40.186412  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0323 23:25:40.208578  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0323 23:25:40.227384  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0323 23:25:40.245282  401618 provision.go:86] duration metric: configureAuth took 435.487257ms
	I0323 23:25:40.245311  401618 ubuntu.go:193] setting minikube options for container-runtime
	I0323 23:25:40.245622  401618 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:25:40.245673  401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
	I0323 23:25:40.321283  401618 main.go:141] libmachine: Using SSH client type: native
	I0323 23:25:40.321744  401618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 32989 <nil> <nil>}
	I0323 23:25:40.321760  401618 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0323 23:25:40.437792  401618 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0323 23:25:40.437825  401618 ubuntu.go:71] root file system type: overlay
	I0323 23:25:40.437981  401618 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0323 23:25:40.438064  401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
	I0323 23:25:40.517587  401618 main.go:141] libmachine: Using SSH client type: native
	I0323 23:25:40.518003  401618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 32989 <nil> <nil>}
	I0323 23:25:40.518064  401618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0323 23:25:40.642671  401618 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0323 23:25:40.642784  401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
	I0323 23:25:40.716391  401618 main.go:141] libmachine: Using SSH client type: native
	I0323 23:25:40.716801  401618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 32989 <nil> <nil>}
	I0323 23:25:40.716821  401618 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0323 23:25:40.837842  401618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0323 23:25:40.837871  401618 machine.go:91] provisioned docker machine in 1.481993353s
	I0323 23:25:40.837885  401618 start.go:300] post-start starting for "pause-574316" (driver="docker")
	I0323 23:25:40.837894  401618 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0323 23:25:40.837987  401618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0323 23:25:40.838048  401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
	I0323 23:25:40.912252  401618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/pause-574316/id_rsa Username:docker}
	I0323 23:25:40.997437  401618 ssh_runner.go:195] Run: cat /etc/os-release
	I0323 23:25:41.000457  401618 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0323 23:25:41.000490  401618 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0323 23:25:41.000504  401618 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0323 23:25:41.000512  401618 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0323 23:25:41.000522  401618 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-62012/.minikube/addons for local assets ...
	I0323 23:25:41.000592  401618 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-62012/.minikube/files for local assets ...
	I0323 23:25:41.000702  401618 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem -> 687022.pem in /etc/ssl/certs
	I0323 23:25:41.000829  401618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0323 23:25:41.008074  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem --> /etc/ssl/certs/687022.pem (1708 bytes)
	I0323 23:25:41.026484  401618 start.go:303] post-start completed in 188.579327ms
	I0323 23:25:41.026573  401618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0323 23:25:41.026619  401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
	I0323 23:25:41.099088  401618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/pause-574316/id_rsa Username:docker}
	I0323 23:25:41.186783  401618 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0323 23:25:41.191463  401618 fix.go:57] fixHost completed within 1.933951947s
	I0323 23:25:41.191505  401618 start.go:83] releasing machines lock for "pause-574316", held for 1.934014729s
	I0323 23:25:41.191587  401618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-574316
	I0323 23:25:41.262808  401618 ssh_runner.go:195] Run: cat /version.json
	I0323 23:25:41.262882  401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
	I0323 23:25:41.262888  401618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0323 23:25:41.262963  401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
	I0323 23:25:41.340837  401618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/pause-574316/id_rsa Username:docker}
	I0323 23:25:41.348422  401618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/pause-574316/id_rsa Username:docker}
	I0323 23:25:41.460233  401618 ssh_runner.go:195] Run: systemctl --version
	I0323 23:25:41.464249  401618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0323 23:25:41.468128  401618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0323 23:25:41.484554  401618 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0323 23:25:41.484643  401618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0323 23:25:41.492572  401618 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0323 23:25:41.492615  401618 start.go:481] detecting cgroup driver to use...
	I0323 23:25:41.492654  401618 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0323 23:25:41.492777  401618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0323 23:25:41.507375  401618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0323 23:25:41.516258  401618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0323 23:25:41.524309  401618 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0323 23:25:41.524358  401618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0323 23:25:41.532176  401618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0323 23:25:41.540045  401618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0323 23:25:41.548055  401618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0323 23:25:41.556320  401618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0323 23:25:41.563881  401618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0323 23:25:41.573263  401618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0323 23:25:41.581440  401618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0323 23:25:41.590053  401618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0323 23:25:41.736075  401618 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0323 23:25:47.710149  401618 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (5.974028883s)
	I0323 23:25:47.710184  401618 start.go:481] detecting cgroup driver to use...
	I0323 23:25:47.710218  401618 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0323 23:25:47.710267  401618 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0323 23:25:47.752166  401618 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0323 23:25:47.752234  401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0323 23:25:47.763692  401618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0323 23:25:47.781831  401618 ssh_runner.go:195] Run: which cri-dockerd
	I0323 23:25:47.785130  401618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0323 23:25:47.793988  401618 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0323 23:25:47.859058  401618 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0323 23:25:48.083211  401618 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0323 23:25:48.318629  401618 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0323 23:25:48.318674  401618 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0323 23:25:48.362810  401618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0323 23:25:48.491183  401618 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0323 23:25:49.242171  401618 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0323 23:25:49.338611  401618 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0323 23:25:49.431988  401618 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0323 23:25:49.518478  401618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0323 23:25:49.606812  401618 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0323 23:25:49.622196  401618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0323 23:25:49.768476  401618 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0323 23:25:49.876462  401618 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0323 23:25:49.876547  401618 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0323 23:25:49.881148  401618 start.go:549] Will wait 60s for crictl version
	I0323 23:25:49.881199  401618 ssh_runner.go:195] Run: which crictl
	I0323 23:25:49.884031  401618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0323 23:25:49.920446  401618 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0323 23:25:49.920502  401618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0323 23:25:49.950337  401618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0323 23:25:49.978007  401618 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 23.0.1 ...
	I0323 23:25:49.978099  401618 cli_runner.go:164] Run: docker network inspect pause-574316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0323 23:25:50.055471  401618 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0323 23:25:50.059400  401618 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0323 23:25:50.059459  401618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0323 23:25:50.081828  401618 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0323 23:25:50.081859  401618 docker.go:569] Images already preloaded, skipping extraction
	I0323 23:25:50.081951  401618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0323 23:25:50.105875  401618 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0323 23:25:50.105904  401618 cache_images.go:84] Images are preloaded, skipping loading
	I0323 23:25:50.105963  401618 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0323 23:25:50.137759  401618 cni.go:84] Creating CNI manager for ""
	I0323 23:25:50.137785  401618 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0323 23:25:50.137803  401618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0323 23:25:50.137818  401618 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-574316 NodeName:pause-574316 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0323 23:25:50.137971  401618 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-574316"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0323 23:25:50.138035  401618 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=pause-574316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.3 ClusterName:pause-574316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0323 23:25:50.138081  401618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
	I0323 23:25:50.146095  401618 binaries.go:44] Found k8s binaries, skipping transfer
	I0323 23:25:50.146155  401618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0323 23:25:50.153058  401618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0323 23:25:50.166207  401618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0323 23:25:50.179847  401618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2088 bytes)
	I0323 23:25:50.195400  401618 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0323 23:25:50.199342  401618 certs.go:56] Setting up /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316 for IP: 192.168.67.2
	I0323 23:25:50.199375  401618 certs.go:186] acquiring lock for shared ca certs: {Name:mkbfcc9ac63a4724ffa0206ecd1910ff6424bfdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0323 23:25:50.199577  401618 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16143-62012/.minikube/ca.key
	I0323 23:25:50.199630  401618 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16143-62012/.minikube/proxy-client-ca.key
	I0323 23:25:50.199720  401618 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.key
	I0323 23:25:50.199802  401618 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/apiserver.key.c7fa3a9e
	I0323 23:25:50.199862  401618 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/proxy-client.key
	I0323 23:25:50.200017  401618 certs.go:401] found cert: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/home/jenkins/minikube-integration/16143-62012/.minikube/certs/68702.pem (1338 bytes)
	W0323 23:25:50.200062  401618 certs.go:397] ignoring /home/jenkins/minikube-integration/16143-62012/.minikube/certs/home/jenkins/minikube-integration/16143-62012/.minikube/certs/68702_empty.pem, impossibly tiny 0 bytes
	I0323 23:25:50.200076  401618 certs.go:401] found cert: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem (1679 bytes)
	I0323 23:25:50.200113  401618 certs.go:401] found cert: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem (1078 bytes)
	I0323 23:25:50.200149  401618 certs.go:401] found cert: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem (1123 bytes)
	I0323 23:25:50.200179  401618 certs.go:401] found cert: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem (1675 bytes)
	I0323 23:25:50.200238  401618 certs.go:401] found cert: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem (1708 bytes)
	I0323 23:25:50.201014  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0323 23:25:50.221140  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0323 23:25:50.240905  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0323 23:25:50.260358  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0323 23:25:50.279345  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0323 23:25:50.299109  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0323 23:25:50.318589  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0323 23:25:50.336426  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0323 23:25:50.354322  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/certs/68702.pem --> /usr/share/ca-certificates/68702.pem (1338 bytes)
	I0323 23:25:50.370942  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem --> /usr/share/ca-certificates/687022.pem (1708 bytes)
	I0323 23:25:50.389751  401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0323 23:25:50.409492  401618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0323 23:25:50.422518  401618 ssh_runner.go:195] Run: openssl version
	I0323 23:25:50.428379  401618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0323 23:25:50.436420  401618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0323 23:25:50.439511  401618 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 23 22:56 /usr/share/ca-certificates/minikubeCA.pem
	I0323 23:25:50.439560  401618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0323 23:25:50.444172  401618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0323 23:25:50.450869  401618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68702.pem && ln -fs /usr/share/ca-certificates/68702.pem /etc/ssl/certs/68702.pem"
	I0323 23:25:50.458693  401618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68702.pem
	I0323 23:25:50.461629  401618 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 23 22:59 /usr/share/ca-certificates/68702.pem
	I0323 23:25:50.461676  401618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68702.pem
	I0323 23:25:50.466163  401618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/68702.pem /etc/ssl/certs/51391683.0"
	I0323 23:25:50.472629  401618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/687022.pem && ln -fs /usr/share/ca-certificates/687022.pem /etc/ssl/certs/687022.pem"
	I0323 23:25:50.480396  401618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/687022.pem
	I0323 23:25:50.483525  401618 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 23 22:59 /usr/share/ca-certificates/687022.pem
	I0323 23:25:50.483566  401618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/687022.pem
	I0323 23:25:50.488443  401618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/687022.pem /etc/ssl/certs/3ec20f2e.0"
	I0323 23:25:50.495824  401618 kubeadm.go:401] StartCluster: {Name:pause-574316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:pause-574316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0323 23:25:50.496003  401618 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0323 23:25:50.516228  401618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0323 23:25:50.523528  401618 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0323 23:25:50.523546  401618 kubeadm.go:633] restartCluster start
	I0323 23:25:50.523593  401618 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0323 23:25:50.530367  401618 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0323 23:25:50.531385  401618 kubeconfig.go:92] found "pause-574316" server: "https://192.168.67.2:8443"
	I0323 23:25:50.533117  401618 kapi.go:59] client config for pause-574316: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.key", CAFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0323 23:25:50.534364  401618 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0323 23:25:50.541456  401618 api_server.go:165] Checking apiserver status ...
	I0323 23:25:50.541494  401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0323 23:25:50.549669  401618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0323 23:25:51.050390  401618 api_server.go:165] Checking apiserver status ...
	I0323 23:25:51.050468  401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0323 23:25:51.064294  401618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0323 23:25:51.550552  401618 api_server.go:165] Checking apiserver status ...
	I0323 23:25:51.550627  401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0323 23:25:51.561537  401618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6125/cgroup
	I0323 23:25:51.570527  401618 api_server.go:181] apiserver freezer: "7:freezer:/docker/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/kubepods/burstable/pode3f7a1eab53ec8fb091240de98bc1524/6a198df97e4bd33611868552786c34b16a1896b4a18709ad6eaa65e7486b5d20"
	I0323 23:25:51.570597  401618 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/kubepods/burstable/pode3f7a1eab53ec8fb091240de98bc1524/6a198df97e4bd33611868552786c34b16a1896b4a18709ad6eaa65e7486b5d20/freezer.state
	I0323 23:25:51.578093  401618 api_server.go:203] freezer state: "THAWED"
	I0323 23:25:51.578117  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:25:56.579297  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0323 23:25:56.579379  401618 retry.go:31] will retry after 281.453148ms: state is "Stopped"
	I0323 23:25:56.861828  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:01.862609  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0323 23:26:01.862661  401618 retry.go:31] will retry after 338.872544ms: state is "Stopped"
	I0323 23:26:02.202207  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:07.205679  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0323 23:26:07.205736  401618 api_server.go:165] Checking apiserver status ...
	I0323 23:26:07.205792  401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0323 23:26:07.219960  401618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6125/cgroup
	I0323 23:26:07.242746  401618 api_server.go:181] apiserver freezer: "7:freezer:/docker/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/kubepods/burstable/pode3f7a1eab53ec8fb091240de98bc1524/6a198df97e4bd33611868552786c34b16a1896b4a18709ad6eaa65e7486b5d20"
	I0323 23:26:07.242832  401618 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/kubepods/burstable/pode3f7a1eab53ec8fb091240de98bc1524/6a198df97e4bd33611868552786c34b16a1896b4a18709ad6eaa65e7486b5d20/freezer.state
	I0323 23:26:07.252982  401618 api_server.go:203] freezer state: "THAWED"
	I0323 23:26:07.253019  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:11.781791  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": read tcp 192.168.67.1:35480->192.168.67.2:8443: read: connection reset by peer
	I0323 23:26:11.781859  401618 retry.go:31] will retry after 287.188822ms: state is "Stopped"
	I0323 23:26:12.069213  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:12.069672  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0323 23:26:12.069713  401618 retry.go:31] will retry after 310.499489ms: state is "Stopped"
	I0323 23:26:12.381213  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:12.381698  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0323 23:26:12.381745  401618 retry.go:31] will retry after 327.791373ms: state is "Stopped"
	I0323 23:26:12.710265  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:12.710710  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0323 23:26:12.710752  401618 retry.go:31] will retry after 495.316645ms: state is "Stopped"
	I0323 23:26:13.206372  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:13.206805  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0323 23:26:13.206850  401618 retry.go:31] will retry after 589.309728ms: state is "Stopped"
	I0323 23:26:13.796739  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:13.797264  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0323 23:26:13.797314  401618 retry.go:31] will retry after 895.454418ms: state is "Stopped"
	I0323 23:26:14.692919  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:14.693369  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0323 23:26:14.693431  401618 retry.go:31] will retry after 1.067586945s: state is "Stopped"
	I0323 23:26:15.761447  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:15.761789  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0323 23:26:15.761829  401618 retry.go:31] will retry after 1.243332361s: state is "Stopped"
	I0323 23:26:17.005481  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:17.005938  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0323 23:26:17.005984  401618 retry.go:31] will retry after 1.422748895s: state is "Stopped"
	I0323 23:26:18.429483  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:18.429933  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0323 23:26:18.429980  401618 retry.go:31] will retry after 1.810935197s: state is "Stopped"
	I0323 23:26:20.241489  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:20.241958  401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0323 23:26:20.242012  401618 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0323 23:26:20.242022  401618 kubeadm.go:1120] stopping kube-system containers ...
	I0323 23:26:20.242169  401618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0323 23:26:20.273426  401618 docker.go:465] Stopping containers: [656b70fafbc2 2b7bc2ac835b 7ff3dcd747a3 d517e8e4d5d2 45416a5cd36b a9b1dc3910d9 6a198df97e4b 840b0c35d444 60c1dee0f178 80c388522552 f70a37494730 4b1c73f39f8c 7c4a71f1f0cd f2351c0cf203 7fed7e2ba6fe 9f27801249b0 b79bc8efd18f 52b133216226 933006561bf4 24f0fb4ace30 c71a79a234db c4b287ab62a2 f61f5c7340ec 03d421288ded 6da34435e995 f14a1f114c0b 8dd03effe021 37b991db5f35 b23873b32bda c5c0072529d3]
	I0323 23:26:20.273517  401618 ssh_runner.go:195] Run: docker stop 656b70fafbc2 2b7bc2ac835b 7ff3dcd747a3 d517e8e4d5d2 45416a5cd36b a9b1dc3910d9 6a198df97e4b 840b0c35d444 60c1dee0f178 80c388522552 f70a37494730 4b1c73f39f8c 7c4a71f1f0cd f2351c0cf203 7fed7e2ba6fe 9f27801249b0 b79bc8efd18f 52b133216226 933006561bf4 24f0fb4ace30 c71a79a234db c4b287ab62a2 f61f5c7340ec 03d421288ded 6da34435e995 f14a1f114c0b 8dd03effe021 37b991db5f35 b23873b32bda c5c0072529d3
	I0323 23:26:25.377843  401618 ssh_runner.go:235] Completed: docker stop 656b70fafbc2 2b7bc2ac835b 7ff3dcd747a3 d517e8e4d5d2 45416a5cd36b a9b1dc3910d9 6a198df97e4b 840b0c35d444 60c1dee0f178 80c388522552 f70a37494730 4b1c73f39f8c 7c4a71f1f0cd f2351c0cf203 7fed7e2ba6fe 9f27801249b0 b79bc8efd18f 52b133216226 933006561bf4 24f0fb4ace30 c71a79a234db c4b287ab62a2 f61f5c7340ec 03d421288ded 6da34435e995 f14a1f114c0b 8dd03effe021 37b991db5f35 b23873b32bda c5c0072529d3: (5.104279359s)
	I0323 23:26:25.377931  401618 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0323 23:26:25.436706  401618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0323 23:26:25.444321  401618 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 23 23:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Mar 23 23:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar 23 23:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 23 23:25 /etc/kubernetes/scheduler.conf
	
	I0323 23:26:25.444385  401618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0323 23:26:25.453011  401618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0323 23:26:25.465602  401618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0323 23:26:25.474584  401618 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0323 23:26:25.474642  401618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0323 23:26:25.488216  401618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0323 23:26:25.500429  401618 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0323 23:26:25.500488  401618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0323 23:26:25.509169  401618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0323 23:26:25.518649  401618 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0323 23:26:25.518678  401618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0323 23:26:25.622028  401618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0323 23:26:26.712524  401618 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090464066s)
	I0323 23:26:26.712561  401618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0323 23:26:26.926415  401618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0323 23:26:27.015749  401618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0323 23:26:27.108937  401618 api_server.go:51] waiting for apiserver process to appear ...
	I0323 23:26:27.109010  401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0323 23:26:27.138818  401618 api_server.go:71] duration metric: took 29.877914ms to wait for apiserver process to appear ...
	I0323 23:26:27.138852  401618 api_server.go:87] waiting for apiserver healthz status ...
	I0323 23:26:27.138865  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:30.702650  401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0323 23:26:30.702688  401618 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0323 23:26:31.203341  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:31.209521  401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0323 23:26:31.209555  401618 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0323 23:26:31.703040  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:31.711962  401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0323 23:26:31.711995  401618 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0323 23:26:32.203410  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:32.209766  401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0323 23:26:32.218798  401618 api_server.go:140] control plane version: v1.26.3
	I0323 23:26:32.218829  401618 api_server.go:130] duration metric: took 5.079969007s to wait for apiserver health ...
	I0323 23:26:32.218847  401618 cni.go:84] Creating CNI manager for ""
	I0323 23:26:32.218863  401618 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0323 23:26:32.221258  401618 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0323 23:26:32.223621  401618 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0323 23:26:32.233200  401618 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (565 bytes)
	I0323 23:26:32.247230  401618 system_pods.go:43] waiting for kube-system pods to appear ...
	I0323 23:26:32.258573  401618 system_pods.go:59] 7 kube-system pods found
	I0323 23:26:32.258599  401618 system_pods.go:61] "coredns-787d4945fb-2sw8v" [05fc3b9f-534f-4c25-ab9a-0f1ea4cb9014] Running
	I0323 23:26:32.258608  401618 system_pods.go:61] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0323 23:26:32.258615  401618 system_pods.go:61] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
	I0323 23:26:32.258619  401618 system_pods.go:61] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
	I0323 23:26:32.258624  401618 system_pods.go:61] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
	I0323 23:26:32.258629  401618 system_pods.go:61] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
	I0323 23:26:32.258633  401618 system_pods.go:61] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
	I0323 23:26:32.258638  401618 system_pods.go:74] duration metric: took 11.390377ms to wait for pod list to return data ...
	I0323 23:26:32.258647  401618 node_conditions.go:102] verifying NodePressure condition ...
	I0323 23:26:32.262117  401618 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0323 23:26:32.262137  401618 node_conditions.go:123] node cpu capacity is 8
	I0323 23:26:32.262149  401618 node_conditions.go:105] duration metric: took 3.492134ms to run NodePressure ...
	I0323 23:26:32.262169  401618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0323 23:26:32.577460  401618 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0323 23:26:32.582902  401618 retry.go:31] will retry after 323.760518ms: kubelet not initialised
	I0323 23:26:32.911740  401618 kubeadm.go:784] kubelet initialised
	I0323 23:26:32.911768  401618 kubeadm.go:785] duration metric: took 334.279613ms waiting for restarted kubelet to initialise ...
	I0323 23:26:32.911781  401618 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0323 23:26:32.917306  401618 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-2sw8v" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:32.923785  401618 pod_ready.go:92] pod "coredns-787d4945fb-2sw8v" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:32.923807  401618 pod_ready.go:81] duration metric: took 6.468377ms waiting for pod "coredns-787d4945fb-2sw8v" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:32.923819  401618 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:34.935968  401618 pod_ready.go:102] pod "coredns-787d4945fb-lljqk" in "kube-system" namespace has status "Ready":"False"
	I0323 23:26:37.435598  401618 pod_ready.go:92] pod "coredns-787d4945fb-lljqk" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:37.435626  401618 pod_ready.go:81] duration metric: took 4.511800496s waiting for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:37.435639  401618 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:39.446424  401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
	I0323 23:26:41.447016  401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
	I0323 23:26:43.946954  401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
	I0323 23:26:44.447057  401618 pod_ready.go:92] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:44.447087  401618 pod_ready.go:81] duration metric: took 7.011439342s waiting for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:44.447102  401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:44.452104  401618 pod_ready.go:92] pod "kube-apiserver-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:44.452122  401618 pod_ready.go:81] duration metric: took 5.012337ms waiting for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:44.452131  401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.154244  401618 pod_ready.go:92] pod "kube-controller-manager-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:45.154286  401618 pod_ready.go:81] duration metric: took 702.146362ms waiting for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.154300  401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.161861  401618 pod_ready.go:92] pod "kube-proxy-lnk2t" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:45.161889  401618 pod_ready.go:81] duration metric: took 7.580234ms waiting for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.161903  401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.166566  401618 pod_ready.go:92] pod "kube-scheduler-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:45.166596  401618 pod_ready.go:81] duration metric: took 4.684396ms waiting for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.166605  401618 pod_ready.go:38] duration metric: took 12.254811598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0323 23:26:45.166630  401618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0323 23:26:45.174654  401618 ops.go:34] apiserver oom_adj: -16
	I0323 23:26:45.174677  401618 kubeadm.go:637] restartCluster took 54.651125652s
	I0323 23:26:45.174685  401618 kubeadm.go:403] StartCluster complete in 54.678873105s
	I0323 23:26:45.174705  401618 settings.go:142] acquiring lock: {Name:mk2143e7b36672d551bcc6ff6483f31f704df2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0323 23:26:45.174775  401618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-62012/kubeconfig
	I0323 23:26:45.175905  401618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-62012/kubeconfig: {Name:mkedf19780b2d3cba14a58c9ca6a4f1d32104ee0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0323 23:26:45.213579  401618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0323 23:26:45.213933  401618 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:26:45.213472  401618 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0323 23:26:45.214148  401618 kapi.go:59] client config for pause-574316: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.key", CAFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0323 23:26:45.414715  401618 out.go:177] * Enabled addons: 
	I0323 23:26:45.217242  401618 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-574316" context rescaled to 1 replicas
	I0323 23:26:45.430053  401618 addons.go:499] enable addons completed in 216.595091ms: enabled=[]
	I0323 23:26:45.430069  401618 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0323 23:26:45.436198  401618 out.go:177] * Verifying Kubernetes components...
	I0323 23:26:45.436358  401618 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0323 23:26:45.446881  401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0323 23:26:45.460908  401618 node_ready.go:35] waiting up to 6m0s for node "pause-574316" to be "Ready" ...
	I0323 23:26:45.463792  401618 node_ready.go:49] node "pause-574316" has status "Ready":"True"
	I0323 23:26:45.463814  401618 node_ready.go:38] duration metric: took 2.869699ms waiting for node "pause-574316" to be "Ready" ...
	I0323 23:26:45.463823  401618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0323 23:26:45.468648  401618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.645139  401618 pod_ready.go:92] pod "coredns-787d4945fb-lljqk" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:45.645160  401618 pod_ready.go:81] duration metric: took 176.488938ms waiting for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.645170  401618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.045231  401618 pod_ready.go:92] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:46.045260  401618 pod_ready.go:81] duration metric: took 400.083583ms waiting for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.045274  401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.444173  401618 pod_ready.go:92] pod "kube-apiserver-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:46.444194  401618 pod_ready.go:81] duration metric: took 398.912915ms waiting for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.444204  401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.844571  401618 pod_ready.go:92] pod "kube-controller-manager-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:46.844592  401618 pod_ready.go:81] duration metric: took 400.382744ms waiting for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.844602  401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:47.244514  401618 pod_ready.go:92] pod "kube-proxy-lnk2t" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:47.244538  401618 pod_ready.go:81] duration metric: took 399.927693ms waiting for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:47.244548  401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:47.644184  401618 pod_ready.go:92] pod "kube-scheduler-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:47.644203  401618 pod_ready.go:81] duration metric: took 399.648889ms waiting for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:47.644210  401618 pod_ready.go:38] duration metric: took 2.180378997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0323 23:26:47.644231  401618 api_server.go:51] waiting for apiserver process to appear ...
	I0323 23:26:47.644265  401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0323 23:26:47.660462  401618 api_server.go:71] duration metric: took 2.230343116s to wait for apiserver process to appear ...
	I0323 23:26:47.660489  401618 api_server.go:87] waiting for apiserver healthz status ...
	I0323 23:26:47.660508  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:47.667464  401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0323 23:26:47.668285  401618 api_server.go:140] control plane version: v1.26.3
	I0323 23:26:47.668303  401618 api_server.go:130] duration metric: took 7.807644ms to wait for apiserver health ...
	I0323 23:26:47.668310  401618 system_pods.go:43] waiting for kube-system pods to appear ...
	I0323 23:26:47.847116  401618 system_pods.go:59] 6 kube-system pods found
	I0323 23:26:47.847153  401618 system_pods.go:61] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running
	I0323 23:26:47.847161  401618 system_pods.go:61] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
	I0323 23:26:47.847168  401618 system_pods.go:61] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
	I0323 23:26:47.847175  401618 system_pods.go:61] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
	I0323 23:26:47.847181  401618 system_pods.go:61] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
	I0323 23:26:47.847187  401618 system_pods.go:61] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
	I0323 23:26:47.847193  401618 system_pods.go:74] duration metric: took 178.878592ms to wait for pod list to return data ...
	I0323 23:26:47.847201  401618 default_sa.go:34] waiting for default service account to be created ...
	I0323 23:26:48.044586  401618 default_sa.go:45] found service account: "default"
	I0323 23:26:48.044616  401618 default_sa.go:55] duration metric: took 197.409776ms for default service account to be created ...
	I0323 23:26:48.044630  401618 system_pods.go:116] waiting for k8s-apps to be running ...
	I0323 23:26:48.247931  401618 system_pods.go:86] 6 kube-system pods found
	I0323 23:26:48.247963  401618 system_pods.go:89] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running
	I0323 23:26:48.247974  401618 system_pods.go:89] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
	I0323 23:26:48.247980  401618 system_pods.go:89] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
	I0323 23:26:48.247986  401618 system_pods.go:89] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
	I0323 23:26:48.247991  401618 system_pods.go:89] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
	I0323 23:26:48.247999  401618 system_pods.go:89] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
	I0323 23:26:48.248007  401618 system_pods.go:126] duration metric: took 203.371205ms to wait for k8s-apps to be running ...
	I0323 23:26:48.248015  401618 system_svc.go:44] waiting for kubelet service to be running ....
	I0323 23:26:48.248065  401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0323 23:26:48.258927  401618 system_svc.go:56] duration metric: took 10.902515ms WaitForService to wait for kubelet.
	I0323 23:26:48.258954  401618 kubeadm.go:578] duration metric: took 2.828842444s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0323 23:26:48.258976  401618 node_conditions.go:102] verifying NodePressure condition ...
	I0323 23:26:48.449583  401618 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0323 23:26:48.449608  401618 node_conditions.go:123] node cpu capacity is 8
	I0323 23:26:48.449620  401618 node_conditions.go:105] duration metric: took 190.638556ms to run NodePressure ...
	I0323 23:26:48.449633  401618 start.go:228] waiting for startup goroutines ...
	I0323 23:26:48.449641  401618 start.go:233] waiting for cluster config update ...
	I0323 23:26:48.449652  401618 start.go:242] writing updated cluster config ...
	I0323 23:26:48.450019  401618 ssh_runner.go:195] Run: rm -f paused
	I0323 23:26:48.534780  401618 start.go:554] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
	I0323 23:26:48.538018  401618 out.go:177] * Done! kubectl is now configured to use "pause-574316" cluster and "default" namespace by default

                                                
                                                
** /stderr **
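Note: the repeated "Checking apiserver healthz ... will retry after ..." lines in the stderr log above are minikube polling the apiserver's /healthz endpoint until it either answers 200 or the wait gives up. The Go program below is a minimal sketch of that polling pattern, not minikube's actual implementation: it skips TLS verification instead of using the cluster CA and client certificates, and the endpoint URL, timeout, and backoff growth are placeholder values chosen to mirror the log.

	// healthzpoll.go: sketch of polling an apiserver /healthz endpoint with backoff.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it returns HTTP 200 or the overall timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-request timeout, like the log's Client.Timeout errors
			Transport: &http.Transport{
				// Assumption for the sketch only; the real check trusts the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		backoff := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			} else {
				fmt.Printf("healthz not reachable: %v\n", err)
			}
			time.Sleep(backoff)
			backoff = backoff * 3 / 2 // grow the wait, as the "will retry after" intervals do
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.67.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
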
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-574316
helpers_test.go:235: (dbg) docker inspect pause-574316:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb",
	        "Created": "2023-03-23T23:25:04.583396388Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 390898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-23T23:25:05.007909282Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/hosts",
	        "LogPath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb-json.log",
	        "Name": "/pause-574316",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-574316:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-574316",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47-init/diff:/var/lib/docker/overlay2/d356d443959743e8c5ec1e688b0ccaccd2483fd24991ca327095d1ea51dadd79/diff:/var/lib/docker/overlay2/dd1855d68604dc5432757610d41f6488e2cf65b7ade63d0ac4dd50e3cb700545/diff:/var/lib/docker/overlay2/3ae5a9ac34ca4f4036f376d3f7ee1e6d806107b6ba140eee2af2df3088fe2af4/diff:/var/lib/docker/overlay2/a88a7a03b1dddb065d2da925165770d1982de0fb6388d7798dec4a6c996388ed/diff:/var/lib/docker/overlay2/11e0cdbbdfb5d84e0d99a3d4a7693f825097d37baa31784b182606407b254347/diff:/var/lib/docker/overlay2/f3679d076f087c60feb261250bae0ef050d7ed7a8876697b61f4e74260ac5c25/diff:/var/lib/docker/overlay2/3a9213ab7d98194272e65090b79370f92e0fed3b68466ca89c2fce6cc06bee37/diff:/var/lib/docker/overlay2/c7e7b51e4ed37e163c31a7a2769a396f00a3a46bbe043bb3d74144e3d7dbdf4b/diff:/var/lib/docker/overlay2/a5a37da3c24f5ba9b69245b491d59fa7f875d4bf22ab2d3b4fe2e0480245836e/diff:/var/lib/docker/overlay2/f36025
f30104b76500045a0755939ab273914eecce2e91f0541c32de5325546f/diff:/var/lib/docker/overlay2/ef9ccd83ee71ed9d46782a820551dbda8865609796f631a741766fab9be9c04b/diff:/var/lib/docker/overlay2/e105b68b5b16f55e25547056d8ce228bdac36d93107fd4a3a78c8b026fbe0140/diff:/var/lib/docker/overlay2/75ca52704ffd583bb6fbed231278a5c352311cb4dee88f8b731377a47cdf43cd/diff:/var/lib/docker/overlay2/70a153c20f330aaea42285756d01aeb9a3e45e8909ea0b266c7d189438588e4b/diff:/var/lib/docker/overlay2/e07683b025df1da95650fadc2612b6df0024b6d4ab531cf439bb426bb94dd7c6/diff:/var/lib/docker/overlay2/a9c09db98b0de89a8bd85bb42c47585ec8dd924dfea9913e0e1e581771cb76db/diff:/var/lib/docker/overlay2/467577b0b0b8cb64beff8ef36e7da084fb7cddcdea88ced35ada883720038870/diff:/var/lib/docker/overlay2/89ecada524594426b58db802e9a64eff841e5a0dda6609f65ba80c77dc71866e/diff:/var/lib/docker/overlay2/d2e226af46510168fcd51d532ca7a03e77c9d9eb5253b85afd78b26e7b839180/diff:/var/lib/docker/overlay2/e7c1552e27888c5d4d72be70f7b4614ac96872e390e99ad721f043fa28cdc212/diff:/var/lib/d
ocker/overlay2/3074211fc4276144c82302477aac25cc2363357462b8212747bf9a6abdb179b8/diff:/var/lib/docker/overlay2/2f0eed0a121e12185ea49a07f0a026b7cd3add1c64e943d8f00609db9cb06035/diff:/var/lib/docker/overlay2/efa9237fe1d3ed78c6d7939b6d7a46778b6c3851395039e00da7e7ba1c07743d/diff:/var/lib/docker/overlay2/0ca055233446f0ea58f8b702a09b991f77ae9c6f1a338762761848f3a4b12d4e/diff:/var/lib/docker/overlay2/aa7036e406ea8fcd3317c56097ff3b2227796276b2a8ab2f3f7103fed4dfa3b5/diff:/var/lib/docker/overlay2/2f3123bc47bc73bed1b1f7f75675e13e493ca4c8e4f5c4cb662aae58d9373cca/diff:/var/lib/docker/overlay2/1275037c371fbe052f7ca3e9c640764633c72ba9f3d6954b012d34cae8b5d69d/diff:/var/lib/docker/overlay2/7b9c1ddebbcba2b26d07bd7fba9c0fd87ce195be38c2a75f219ac7de57f85b3f/diff:/var/lib/docker/overlay2/2b39bb0f285174bfa621ed101af05ba3552825ab700a73135af1e8b8d7f0bb81/diff:/var/lib/docker/overlay2/643ab8ec872c6defa175401a06dd4a300105c4061619e41059a39a3ee35e3d40/diff:/var/lib/docker/overlay2/713ee57325a771a6a041c255726b832978f929eb1147c72212d96dd7dde
734b2/diff:/var/lib/docker/overlay2/19c1f1f71db682b75e904ad1c7d909f372d24486542012874e578917dc9a9bdf/diff:/var/lib/docker/overlay2/d26fed6403eddd78cf74be1d4a1f4012e1edccb465491f947e4746d92cebcd56/diff:/var/lib/docker/overlay2/0086cdc0bd9c0e4bd086d59a3944cac9d08674d00c80fa77d1f9faa935a5fb19/diff:/var/lib/docker/overlay2/9e14b9f084a1ea7826ee394f169e32a19b56fa135bde5da69486094355c778bb/diff:/var/lib/docker/overlay2/92af9bb2d1b59e9a45cd00af02a78ed7edab34388b268ad30cf749708e273ee8/diff:/var/lib/docker/overlay2/b13dcd677cb58d34d216059052299c900b1728fe3d46ae29cdf0f9a6991696ac/diff:/var/lib/docker/overlay2/30ba19dfbdf89b50aa26fe1695664407f059e1a354830d1d0363128794c81c8f/diff:/var/lib/docker/overlay2/0a91cb0450bc46b302d1b3518574e94a65ab366928b7b67d4dd446e682a14338/diff:/var/lib/docker/overlay2/0b3c4aae10bf80ea7c918fa052ad5ed468c2ebe01aa2f0658bc20304d1f6b07e/diff:/var/lib/docker/overlay2/9602ed727f176a29d28ed2d2045ad3c93f4ec63578399744c69db3d3057f1ed7/diff:/var/lib/docker/overlay2/33399f037b75aa41b061c2f9330cd6f041c290
9051f6ad5b09141a0346202db9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-574316",
	                "Source": "/var/lib/docker/volumes/pause-574316/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-574316",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-574316",
	                "name.minikube.sigs.k8s.io": "pause-574316",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "83727ed535e639dbb7b60a28c289ec43475eb83a2bfc731da6a7d8b3710be5ba",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32989"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32988"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/83727ed535e6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-574316": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "973cf0ca8459",
	                        "pause-574316"
	                    ],
	                    "NetworkID": "2400bfbdd9cf00f3450521e73ae0be02c2bb9e5678c8bce35f9e0dc4ced8fa23",
	                    "EndpointID": "1af4d5eb5080f4897840d3dd79c7fcfc8ac3d8dcb7665dd57389ff515a84a05e",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
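(For reference when reading the inspect dump above: the same container state can be re-collected from a live profile with docker inspect. A minimal sketch, assuming the kic container is named after the profile pause-574316 as the labels above suggest; the exact invocation used by the test helper is not shown in this excerpt.)

	# Full post-mortem state of the container backing the pause-574316 profile
	docker inspect pause-574316
	# Only the published host ports, which is what the "Ports" block above records
	docker inspect --format '{{json .NetworkSettings.Ports}}' pause-574316
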
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-574316 -n pause-574316
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-574316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-574316 logs -n 25: (1.222950807s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl cat kubelet                                |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl status docker --all                        |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl cat docker                                 |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /etc/docker/daemon.json                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo docker                         | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | system info                                          |                          |         |         |                     |                     |
	| start   | -p force-systemd-env-286741                          | force-systemd-env-286741 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | --memory=2048                                        |                          |         |         |                     |                     |
	|         | --alsologtostderr                                    |                          |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                          |         |         |                     |                     |
	|         | --container-runtime=docker                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl status cri-docker                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | cri-dockerd --version                                |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl status containerd                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl cat containerd                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | containerd config dump                               |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl status crio --all                          |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo find                           | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo crio                           | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | config                                               |                          |         |         |                     |                     |
	| delete  | -p cilium-452361                                     | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | 23 Mar 23 23:26 UTC |
	| start   | -p old-k8s-version-063647                            | old-k8s-version-063647   | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | --memory=2200                                        |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                          |         |         |                     |                     |
	|         | --kvm-network=default                                |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                          |         |         |                     |                     |
	|         | --keep-context=false                                 |                          |         |         |                     |                     |
	|         | --driver=docker                                      |                          |         |         |                     |                     |
	|         | --container-runtime=docker                           |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                          |         |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/23 23:26:40
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0323 23:26:40.042149  428061 out.go:296] Setting OutFile to fd 1 ...
	I0323 23:26:40.042248  428061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:26:40.042257  428061 out.go:309] Setting ErrFile to fd 2...
	I0323 23:26:40.042261  428061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:26:40.042366  428061 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
	I0323 23:26:40.042954  428061 out.go:303] Setting JSON to false
	I0323 23:26:40.047193  428061 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7746,"bootTime":1679606254,"procs":1211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0323 23:26:40.047254  428061 start.go:135] virtualization: kvm guest
	I0323 23:26:40.049796  428061 out.go:177] * [old-k8s-version-063647] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0323 23:26:40.051284  428061 out.go:177]   - MINIKUBE_LOCATION=16143
	I0323 23:26:40.051309  428061 notify.go:220] Checking for updates...
	I0323 23:26:40.052905  428061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0323 23:26:40.054785  428061 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	I0323 23:26:40.056430  428061 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	I0323 23:26:40.058083  428061 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0323 23:26:40.059646  428061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0323 23:26:40.061783  428061 config.go:182] Loaded profile config "force-systemd-env-286741": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:26:40.061882  428061 config.go:182] Loaded profile config "kubernetes-upgrade-120624": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-beta.0
	I0323 23:26:40.062033  428061 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:26:40.062098  428061 driver.go:365] Setting default libvirt URI to qemu:///system
	I0323 23:26:40.147368  428061 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0323 23:26:40.147472  428061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0323 23:26:40.295961  428061 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2023-03-23 23:26:40.275708441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0323 23:26:40.296057  428061 docker.go:294] overlay module found
	I0323 23:26:40.298752  428061 out.go:177] * Using the docker driver based on user configuration
	I0323 23:26:40.300448  428061 start.go:295] selected driver: docker
	I0323 23:26:40.300468  428061 start.go:856] validating driver "docker" against <nil>
	I0323 23:26:40.300482  428061 start.go:867] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0323 23:26:40.301339  428061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0323 23:26:40.438182  428061 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:42 SystemTime:2023-03-23 23:26:40.428586758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0323 23:26:40.438301  428061 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0323 23:26:40.438509  428061 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0323 23:26:40.441248  428061 out.go:177] * Using Docker driver with root privileges
	I0323 23:26:40.442932  428061 cni.go:84] Creating CNI manager for ""
	I0323 23:26:40.442974  428061 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0323 23:26:40.442984  428061 start_flags.go:319] config:
	{Name:old-k8s-version-063647 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-063647 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0323 23:26:40.444845  428061 out.go:177] * Starting control plane node old-k8s-version-063647 in cluster old-k8s-version-063647
	I0323 23:26:40.446536  428061 cache.go:120] Beginning downloading kic base image for docker with docker
	I0323 23:26:40.448053  428061 out.go:177] * Pulling base image ...
	I0323 23:26:40.449652  428061 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0323 23:26:40.449683  428061 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0323 23:26:40.449703  428061 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0323 23:26:40.449720  428061 cache.go:57] Caching tarball of preloaded images
	I0323 23:26:40.449803  428061 preload.go:174] Found /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0323 23:26:40.449814  428061 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0323 23:26:40.449923  428061 profile.go:148] Saving config to /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/config.json ...
	I0323 23:26:40.449948  428061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/config.json: {Name:mkd269866aecb4e0ebd7c80fae44792dc2e78f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0323 23:26:40.540045  428061 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0323 23:26:40.540081  428061 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0323 23:26:40.540105  428061 cache.go:193] Successfully downloaded all kic artifacts
	I0323 23:26:40.540144  428061 start.go:364] acquiring machines lock for old-k8s-version-063647: {Name:mk836ec8f4a8439e66a7c2c2dcb6074efc06d654 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0323 23:26:40.540267  428061 start.go:368] acquired machines lock for "old-k8s-version-063647" in 98.708µs
	I0323 23:26:40.540298  428061 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-063647 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-063647 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0323 23:26:40.540420  428061 start.go:125] createHost starting for "" (driver="docker")
	I0323 23:26:37.666420  360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0323 23:26:37.666756  360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0323 23:26:37.915164  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0323 23:26:37.934415  360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
	I0323 23:26:37.934495  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0323 23:26:37.954816  360910 logs.go:277] 1 containers: [a90d829451b2]
	I0323 23:26:37.954881  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0323 23:26:37.973222  360910 logs.go:277] 0 containers: []
	W0323 23:26:37.973245  360910 logs.go:279] No container was found matching "coredns"
	I0323 23:26:37.973298  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0323 23:26:37.992640  360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
	I0323 23:26:37.992731  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0323 23:26:38.012097  360910 logs.go:277] 1 containers: [333ad261cea4]
	I0323 23:26:38.012179  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0323 23:26:38.030328  360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
	I0323 23:26:38.030409  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0323 23:26:38.048993  360910 logs.go:277] 0 containers: []
	W0323 23:26:38.049024  360910 logs.go:279] No container was found matching "kindnet"
	I0323 23:26:38.049080  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0323 23:26:38.068667  360910 logs.go:277] 1 containers: [eac6b13c2df0]
	I0323 23:26:38.068707  360910 logs.go:123] Gathering logs for describe nodes ...
	I0323 23:26:38.068722  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0323 23:26:38.127007  360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0323 23:26:38.127040  360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
	I0323 23:26:38.127056  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
	I0323 23:26:38.147666  360910 logs.go:123] Gathering logs for dmesg ...
	I0323 23:26:38.147691  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0323 23:26:38.168212  360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
	I0323 23:26:38.168249  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
	I0323 23:26:38.197795  360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
	I0323 23:26:38.197836  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
	I0323 23:26:38.243949  360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
	I0323 23:26:38.243989  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
	I0323 23:26:38.264103  360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
	I0323 23:26:38.264130  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
	I0323 23:26:38.288660  360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
	I0323 23:26:38.288696  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
	I0323 23:26:38.363370  360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
	I0323 23:26:38.363403  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
	I0323 23:26:38.386060  360910 logs.go:123] Gathering logs for container status ...
	I0323 23:26:38.386089  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0323 23:26:38.418791  360910 logs.go:123] Gathering logs for kubelet ...
	I0323 23:26:38.418815  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0323 23:26:38.548713  360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
	I0323 23:26:38.548764  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
	I0323 23:26:38.579492  360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
	I0323 23:26:38.579537  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
	I0323 23:26:38.618692  360910 logs.go:123] Gathering logs for Docker ...
	I0323 23:26:38.618721  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0323 23:26:41.155209  360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0323 23:26:41.155664  360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0323 23:26:41.415055  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0323 23:26:41.434873  360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
	I0323 23:26:41.434945  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0323 23:26:41.455006  360910 logs.go:277] 1 containers: [a90d829451b2]
	I0323 23:26:41.455077  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0323 23:26:41.472882  360910 logs.go:277] 0 containers: []
	W0323 23:26:41.472906  360910 logs.go:279] No container was found matching "coredns"
	I0323 23:26:41.472950  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0323 23:26:41.491292  360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
	I0323 23:26:41.491390  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0323 23:26:39.446424  401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
	I0323 23:26:41.447016  401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
	I0323 23:26:39.280123  427158 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0323 23:26:39.280357  427158 start.go:159] libmachine.API.Create for "force-systemd-env-286741" (driver="docker")
	I0323 23:26:39.280387  427158 client.go:168] LocalClient.Create starting
	I0323 23:26:39.280458  427158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem
	I0323 23:26:39.280507  427158 main.go:141] libmachine: Decoding PEM data...
	I0323 23:26:39.280530  427158 main.go:141] libmachine: Parsing certificate...
	I0323 23:26:39.280594  427158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem
	I0323 23:26:39.280623  427158 main.go:141] libmachine: Decoding PEM data...
	I0323 23:26:39.280640  427158 main.go:141] libmachine: Parsing certificate...
	I0323 23:26:39.280974  427158 cli_runner.go:164] Run: docker network inspect force-systemd-env-286741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0323 23:26:39.354615  427158 cli_runner.go:211] docker network inspect force-systemd-env-286741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0323 23:26:39.354704  427158 network_create.go:281] running [docker network inspect force-systemd-env-286741] to gather additional debugging logs...
	I0323 23:26:39.354728  427158 cli_runner.go:164] Run: docker network inspect force-systemd-env-286741
	W0323 23:26:39.425557  427158 cli_runner.go:211] docker network inspect force-systemd-env-286741 returned with exit code 1
	I0323 23:26:39.425596  427158 network_create.go:284] error running [docker network inspect force-systemd-env-286741]: docker network inspect force-systemd-env-286741: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-286741 not found
	I0323 23:26:39.425628  427158 network_create.go:286] output of [docker network inspect force-systemd-env-286741]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-286741 not found
	
	** /stderr **
	I0323 23:26:39.425680  427158 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0323 23:26:39.503698  427158 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5c8e73f5a026 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0a:b3:fe:c5} reservation:<nil>}
	I0323 23:26:39.504676  427158 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76643bda3762 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f7:a2:b3:ec} reservation:<nil>}
	I0323 23:26:39.505710  427158 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2400bfbdd9cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a7:a4:76:86} reservation:<nil>}
	I0323 23:26:39.506685  427158 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cd4e78a8bfb8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:a6:13:91:cb} reservation:<nil>}
	I0323 23:26:39.507885  427158 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00175a3d0}
	I0323 23:26:39.507923  427158 network_create.go:123] attempt to create docker network force-systemd-env-286741 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0323 23:26:39.507984  427158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-286741 force-systemd-env-286741
	I0323 23:26:39.624494  427158 network_create.go:107] docker network force-systemd-env-286741 192.168.85.0/24 created
	I0323 23:26:39.624528  427158 kic.go:117] calculated static IP "192.168.85.2" for the "force-systemd-env-286741" container
	I0323 23:26:39.624580  427158 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0323 23:26:39.699198  427158 cli_runner.go:164] Run: docker volume create force-systemd-env-286741 --label name.minikube.sigs.k8s.io=force-systemd-env-286741 --label created_by.minikube.sigs.k8s.io=true
	I0323 23:26:39.772552  427158 oci.go:103] Successfully created a docker volume force-systemd-env-286741
	I0323 23:26:39.772640  427158 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-286741-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-286741 --entrypoint /usr/bin/test -v force-systemd-env-286741:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
	I0323 23:26:40.396101  427158 oci.go:107] Successfully prepared a docker volume force-systemd-env-286741
	I0323 23:26:40.396169  427158 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0323 23:26:40.396201  427158 kic.go:190] Starting extracting preloaded images to volume ...
	I0323 23:26:40.396283  427158 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-286741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
	I0323 23:26:43.652059  427158 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-286741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (3.255698579s)
	I0323 23:26:43.652098  427158 kic.go:199] duration metric: took 3.255892 seconds to extract preloaded images to volume
	W0323 23:26:43.652249  427158 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0323 23:26:43.652340  427158 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0323 23:26:43.788292  427158 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-286741 --name force-systemd-env-286741 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-286741 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-286741 --network force-systemd-env-286741 --ip 192.168.85.2 --volume force-systemd-env-286741:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
	I0323 23:26:40.542931  428061 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0323 23:26:40.543143  428061 start.go:159] libmachine.API.Create for "old-k8s-version-063647" (driver="docker")
	I0323 23:26:40.543161  428061 client.go:168] LocalClient.Create starting
	I0323 23:26:40.543233  428061 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem
	I0323 23:26:40.543267  428061 main.go:141] libmachine: Decoding PEM data...
	I0323 23:26:40.543291  428061 main.go:141] libmachine: Parsing certificate...
	I0323 23:26:40.543363  428061 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem
	I0323 23:26:40.543394  428061 main.go:141] libmachine: Decoding PEM data...
	I0323 23:26:40.543409  428061 main.go:141] libmachine: Parsing certificate...
	I0323 23:26:40.543830  428061 cli_runner.go:164] Run: docker network inspect old-k8s-version-063647 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0323 23:26:40.622688  428061 cli_runner.go:211] docker network inspect old-k8s-version-063647 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0323 23:26:40.622796  428061 network_create.go:281] running [docker network inspect old-k8s-version-063647] to gather additional debugging logs...
	I0323 23:26:40.622825  428061 cli_runner.go:164] Run: docker network inspect old-k8s-version-063647
	W0323 23:26:40.691850  428061 cli_runner.go:211] docker network inspect old-k8s-version-063647 returned with exit code 1
	I0323 23:26:40.691881  428061 network_create.go:284] error running [docker network inspect old-k8s-version-063647]: docker network inspect old-k8s-version-063647: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-063647 not found
	I0323 23:26:40.691895  428061 network_create.go:286] output of [docker network inspect old-k8s-version-063647]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-063647 not found
	
	** /stderr **
	I0323 23:26:40.691971  428061 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0323 23:26:40.769117  428061 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5c8e73f5a026 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0a:b3:fe:c5} reservation:<nil>}
	I0323 23:26:40.769965  428061 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76643bda3762 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f7:a2:b3:ec} reservation:<nil>}
	I0323 23:26:40.770928  428061 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2400bfbdd9cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a7:a4:76:86} reservation:<nil>}
	I0323 23:26:40.771945  428061 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cd4e78a8bfb8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:a6:13:91:cb} reservation:<nil>}
	I0323 23:26:40.773155  428061 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f79741dc633b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:e0:82:cf:7a} reservation:<nil>}
	I0323 23:26:40.774473  428061 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014e36b0}
	I0323 23:26:40.774511  428061 network_create.go:123] attempt to create docker network old-k8s-version-063647 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0323 23:26:40.774584  428061 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-063647 old-k8s-version-063647
	I0323 23:26:40.898151  428061 network_create.go:107] docker network old-k8s-version-063647 192.168.94.0/24 created
	I0323 23:26:40.898189  428061 kic.go:117] calculated static IP "192.168.94.2" for the "old-k8s-version-063647" container
	I0323 23:26:40.898268  428061 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0323 23:26:40.974566  428061 cli_runner.go:164] Run: docker volume create old-k8s-version-063647 --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --label created_by.minikube.sigs.k8s.io=true
	I0323 23:26:41.045122  428061 oci.go:103] Successfully created a docker volume old-k8s-version-063647
	I0323 23:26:41.045212  428061 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-063647-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --entrypoint /usr/bin/test -v old-k8s-version-063647:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
	I0323 23:26:44.069733  428061 cli_runner.go:217] Completed: docker run --rm --name old-k8s-version-063647-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --entrypoint /usr/bin/test -v old-k8s-version-063647:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib: (3.024480313s)
	I0323 23:26:44.069768  428061 oci.go:107] Successfully prepared a docker volume old-k8s-version-063647
	I0323 23:26:44.069781  428061 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0323 23:26:44.069803  428061 kic.go:190] Starting extracting preloaded images to volume ...
	I0323 23:26:44.069874  428061 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-063647:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
	I0323 23:26:43.946954  401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
	I0323 23:26:44.447057  401618 pod_ready.go:92] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:44.447087  401618 pod_ready.go:81] duration metric: took 7.011439342s waiting for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:44.447102  401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:44.452104  401618 pod_ready.go:92] pod "kube-apiserver-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:44.452122  401618 pod_ready.go:81] duration metric: took 5.012337ms waiting for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:44.452131  401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.154244  401618 pod_ready.go:92] pod "kube-controller-manager-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:45.154286  401618 pod_ready.go:81] duration metric: took 702.146362ms waiting for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.154300  401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.161861  401618 pod_ready.go:92] pod "kube-proxy-lnk2t" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:45.161889  401618 pod_ready.go:81] duration metric: took 7.580234ms waiting for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.161903  401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.166566  401618 pod_ready.go:92] pod "kube-scheduler-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:45.166596  401618 pod_ready.go:81] duration metric: took 4.684396ms waiting for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.166605  401618 pod_ready.go:38] duration metric: took 12.254811598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0323 23:26:45.166630  401618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0323 23:26:45.174654  401618 ops.go:34] apiserver oom_adj: -16
	I0323 23:26:45.174677  401618 kubeadm.go:637] restartCluster took 54.651125652s
	I0323 23:26:45.174685  401618 kubeadm.go:403] StartCluster complete in 54.678873105s
	I0323 23:26:45.174705  401618 settings.go:142] acquiring lock: {Name:mk2143e7b36672d551bcc6ff6483f31f704df2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0323 23:26:45.174775  401618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-62012/kubeconfig
	I0323 23:26:45.175905  401618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-62012/kubeconfig: {Name:mkedf19780b2d3cba14a58c9ca6a4f1d32104ee0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0323 23:26:45.213579  401618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0323 23:26:45.213933  401618 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:26:45.213472  401618 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0323 23:26:45.214148  401618 kapi.go:59] client config for pause-574316: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.key", CAFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0323 23:26:45.414715  401618 out.go:177] * Enabled addons: 
	I0323 23:26:45.217242  401618 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-574316" context rescaled to 1 replicas
	I0323 23:26:45.430053  401618 addons.go:499] enable addons completed in 216.595091ms: enabled=[]
	I0323 23:26:45.430069  401618 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0323 23:26:45.436198  401618 out.go:177] * Verifying Kubernetes components...
	I0323 23:26:41.512784  360910 logs.go:277] 1 containers: [333ad261cea4]
	I0323 23:26:41.580770  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0323 23:26:41.604486  360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
	I0323 23:26:41.604573  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0323 23:26:41.623789  360910 logs.go:277] 0 containers: []
	W0323 23:26:41.623821  360910 logs.go:279] No container was found matching "kindnet"
	I0323 23:26:41.623896  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0323 23:26:41.644226  360910 logs.go:277] 1 containers: [eac6b13c2df0]
	I0323 23:26:41.644272  360910 logs.go:123] Gathering logs for kubelet ...
	I0323 23:26:41.644288  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0323 23:26:41.748676  360910 logs.go:123] Gathering logs for dmesg ...
	I0323 23:26:41.748714  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0323 23:26:41.768332  360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
	I0323 23:26:41.768367  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
	I0323 23:26:41.792311  360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
	I0323 23:26:41.792341  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
	I0323 23:26:41.830521  360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
	I0323 23:26:41.830556  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
	I0323 23:26:41.860609  360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
	I0323 23:26:41.860650  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
	I0323 23:26:41.932251  360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
	I0323 23:26:41.932290  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
	I0323 23:26:41.963057  360910 logs.go:123] Gathering logs for container status ...
	I0323 23:26:41.963098  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0323 23:26:41.993699  360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
	I0323 23:26:41.993742  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
	I0323 23:26:42.025209  360910 logs.go:123] Gathering logs for Docker ...
	I0323 23:26:42.025243  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0323 23:26:42.056243  360910 logs.go:123] Gathering logs for describe nodes ...
	I0323 23:26:42.056283  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0323 23:26:42.128632  360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0323 23:26:42.128657  360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
	I0323 23:26:42.128672  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
	I0323 23:26:42.163262  360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
	I0323 23:26:42.163298  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
	I0323 23:26:42.188287  360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
	I0323 23:26:42.188316  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
	I0323 23:26:44.714609  360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0323 23:26:44.715050  360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0323 23:26:44.915428  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0323 23:26:44.936310  360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
	I0323 23:26:44.936415  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0323 23:26:44.957324  360910 logs.go:277] 1 containers: [a90d829451b2]
	I0323 23:26:44.957387  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0323 23:26:44.980654  360910 logs.go:277] 0 containers: []
	W0323 23:26:44.980682  360910 logs.go:279] No container was found matching "coredns"
	I0323 23:26:44.980734  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0323 23:26:45.003148  360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
	I0323 23:26:45.003234  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0323 23:26:45.022249  360910 logs.go:277] 1 containers: [333ad261cea4]
	I0323 23:26:45.022323  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0323 23:26:45.040205  360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
	I0323 23:26:45.040282  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0323 23:26:45.057312  360910 logs.go:277] 0 containers: []
	W0323 23:26:45.057337  360910 logs.go:279] No container was found matching "kindnet"
	I0323 23:26:45.057385  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0323 23:26:45.080434  360910 logs.go:277] 1 containers: [eac6b13c2df0]
	I0323 23:26:45.080479  360910 logs.go:123] Gathering logs for dmesg ...
	I0323 23:26:45.080495  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0323 23:26:45.104865  360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
	I0323 23:26:45.104918  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
	I0323 23:26:45.133666  360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
	I0323 23:26:45.133710  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
	I0323 23:26:45.162931  360910 logs.go:123] Gathering logs for container status ...
	I0323 23:26:45.162970  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0323 23:26:45.202791  360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
	I0323 23:26:45.202825  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
	I0323 23:26:45.244277  360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
	I0323 23:26:45.244379  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
	I0323 23:26:45.282659  360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
	I0323 23:26:45.282742  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
	I0323 23:26:45.313254  360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
	I0323 23:26:45.313334  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
	I0323 23:26:45.336545  360910 logs.go:123] Gathering logs for Docker ...
	I0323 23:26:45.336594  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0323 23:26:45.377128  360910 logs.go:123] Gathering logs for kubelet ...
	I0323 23:26:45.377170  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0323 23:26:45.514087  360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
	I0323 23:26:45.514205  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
	I0323 23:26:45.592082  360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
	I0323 23:26:45.592121  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
	I0323 23:26:45.619139  360910 logs.go:123] Gathering logs for describe nodes ...
	I0323 23:26:45.619172  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0323 23:26:45.678335  360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0323 23:26:45.678389  360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
	I0323 23:26:45.678404  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
	I0323 23:26:45.436358  401618 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0323 23:26:45.446881  401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0323 23:26:45.460908  401618 node_ready.go:35] waiting up to 6m0s for node "pause-574316" to be "Ready" ...
	I0323 23:26:45.463792  401618 node_ready.go:49] node "pause-574316" has status "Ready":"True"
	I0323 23:26:45.463814  401618 node_ready.go:38] duration metric: took 2.869699ms waiting for node "pause-574316" to be "Ready" ...
	I0323 23:26:45.463823  401618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0323 23:26:45.468648  401618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.645139  401618 pod_ready.go:92] pod "coredns-787d4945fb-lljqk" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:45.645160  401618 pod_ready.go:81] duration metric: took 176.488938ms waiting for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.645170  401618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.045231  401618 pod_ready.go:92] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:46.045260  401618 pod_ready.go:81] duration metric: took 400.083583ms waiting for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.045274  401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.444173  401618 pod_ready.go:92] pod "kube-apiserver-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:46.444194  401618 pod_ready.go:81] duration metric: took 398.912915ms waiting for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.444204  401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.844571  401618 pod_ready.go:92] pod "kube-controller-manager-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:46.844592  401618 pod_ready.go:81] duration metric: took 400.382744ms waiting for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.844602  401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:47.244514  401618 pod_ready.go:92] pod "kube-proxy-lnk2t" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:47.244538  401618 pod_ready.go:81] duration metric: took 399.927693ms waiting for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:47.244548  401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:47.644184  401618 pod_ready.go:92] pod "kube-scheduler-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:47.644203  401618 pod_ready.go:81] duration metric: took 399.648889ms waiting for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:47.644210  401618 pod_ready.go:38] duration metric: took 2.180378997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0323 23:26:47.644231  401618 api_server.go:51] waiting for apiserver process to appear ...
	I0323 23:26:47.644265  401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0323 23:26:47.660462  401618 api_server.go:71] duration metric: took 2.230343116s to wait for apiserver process to appear ...
	I0323 23:26:47.660489  401618 api_server.go:87] waiting for apiserver healthz status ...
	I0323 23:26:47.660508  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:47.667464  401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0323 23:26:47.668285  401618 api_server.go:140] control plane version: v1.26.3
	I0323 23:26:47.668303  401618 api_server.go:130] duration metric: took 7.807644ms to wait for apiserver health ...
	I0323 23:26:47.668310  401618 system_pods.go:43] waiting for kube-system pods to appear ...
	I0323 23:26:47.847116  401618 system_pods.go:59] 6 kube-system pods found
	I0323 23:26:47.847153  401618 system_pods.go:61] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running
	I0323 23:26:47.847161  401618 system_pods.go:61] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
	I0323 23:26:47.847168  401618 system_pods.go:61] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
	I0323 23:26:47.847175  401618 system_pods.go:61] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
	I0323 23:26:47.847181  401618 system_pods.go:61] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
	I0323 23:26:47.847187  401618 system_pods.go:61] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
	I0323 23:26:47.847193  401618 system_pods.go:74] duration metric: took 178.878592ms to wait for pod list to return data ...
	I0323 23:26:47.847201  401618 default_sa.go:34] waiting for default service account to be created ...
	I0323 23:26:48.044586  401618 default_sa.go:45] found service account: "default"
	I0323 23:26:48.044616  401618 default_sa.go:55] duration metric: took 197.409776ms for default service account to be created ...
	I0323 23:26:48.044630  401618 system_pods.go:116] waiting for k8s-apps to be running ...
	I0323 23:26:48.247931  401618 system_pods.go:86] 6 kube-system pods found
	I0323 23:26:48.247963  401618 system_pods.go:89] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running
	I0323 23:26:48.247974  401618 system_pods.go:89] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
	I0323 23:26:48.247980  401618 system_pods.go:89] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
	I0323 23:26:48.247986  401618 system_pods.go:89] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
	I0323 23:26:48.247991  401618 system_pods.go:89] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
	I0323 23:26:48.247999  401618 system_pods.go:89] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
	I0323 23:26:48.248007  401618 system_pods.go:126] duration metric: took 203.371205ms to wait for k8s-apps to be running ...
	I0323 23:26:48.248015  401618 system_svc.go:44] waiting for kubelet service to be running ....
	I0323 23:26:48.248065  401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0323 23:26:48.258927  401618 system_svc.go:56] duration metric: took 10.902515ms WaitForService to wait for kubelet.
	I0323 23:26:48.258954  401618 kubeadm.go:578] duration metric: took 2.828842444s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0323 23:26:48.258976  401618 node_conditions.go:102] verifying NodePressure condition ...
	I0323 23:26:48.449583  401618 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0323 23:26:48.449608  401618 node_conditions.go:123] node cpu capacity is 8
	I0323 23:26:48.449620  401618 node_conditions.go:105] duration metric: took 190.638556ms to run NodePressure ...
	I0323 23:26:48.449633  401618 start.go:228] waiting for startup goroutines ...
	I0323 23:26:48.449641  401618 start.go:233] waiting for cluster config update ...
	I0323 23:26:48.449652  401618 start.go:242] writing updated cluster config ...
	I0323 23:26:48.450019  401618 ssh_runner.go:195] Run: rm -f paused
	I0323 23:26:48.534780  401618 start.go:554] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
	I0323 23:26:48.538018  401618 out.go:177] * Done! kubectl is now configured to use "pause-574316" cluster and "default" namespace by default
	I0323 23:26:44.308331  427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Running}}
	I0323 23:26:44.394439  427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Status}}
	I0323 23:26:44.471392  427158 cli_runner.go:164] Run: docker exec force-systemd-env-286741 stat /var/lib/dpkg/alternatives/iptables
	I0323 23:26:44.603293  427158 oci.go:144] the created container "force-systemd-env-286741" has a running status.
	I0323 23:26:44.603330  427158 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa...
	I0323 23:26:44.920036  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0323 23:26:44.920082  427158 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0323 23:26:45.161321  427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Status}}
	I0323 23:26:45.251141  427158 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0323 23:26:45.251176  427158 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-286741 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0323 23:26:45.400052  427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Status}}
	I0323 23:26:45.485912  427158 machine.go:88] provisioning docker machine ...
	I0323 23:26:45.485973  427158 ubuntu.go:169] provisioning hostname "force-systemd-env-286741"
	I0323 23:26:45.486046  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:45.565967  427158 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:45.566601  427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I0323 23:26:45.566627  427158 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-286741 && echo "force-systemd-env-286741" | sudo tee /etc/hostname
	I0323 23:26:45.780316  427158 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-286741
	
	I0323 23:26:45.780413  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:45.856411  427158 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:45.857051  427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I0323 23:26:45.857097  427158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-286741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-286741/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-286741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0323 23:26:45.977892  427158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0323 23:26:45.977934  427158 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16143-62012/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-62012/.minikube}
	I0323 23:26:45.977978  427158 ubuntu.go:177] setting up certificates
	I0323 23:26:45.977996  427158 provision.go:83] configureAuth start
	I0323 23:26:45.978074  427158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-286741
	I0323 23:26:46.057572  427158 provision.go:138] copyHostCerts
	I0323 23:26:46.057625  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem
	I0323 23:26:46.057666  427158 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem, removing ...
	I0323 23:26:46.057678  427158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem
	I0323 23:26:46.057752  427158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem (1078 bytes)
	I0323 23:26:46.057846  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem
	I0323 23:26:46.057875  427158 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem, removing ...
	I0323 23:26:46.057885  427158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem
	I0323 23:26:46.057920  427158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem (1123 bytes)
	I0323 23:26:46.057987  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem
	I0323 23:26:46.058014  427158 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem, removing ...
	I0323 23:26:46.058025  427158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem
	I0323 23:26:46.058056  427158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem (1675 bytes)
	I0323 23:26:46.058133  427158 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-286741 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-env-286741]
	I0323 23:26:46.508497  427158 provision.go:172] copyRemoteCerts
	I0323 23:26:46.508591  427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0323 23:26:46.508655  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:46.583159  427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
	I0323 23:26:46.668948  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0323 23:26:46.669009  427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0323 23:26:46.687152  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0323 23:26:46.687222  427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
	I0323 23:26:46.706760  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0323 23:26:46.706834  427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0323 23:26:46.724180  427158 provision.go:86] duration metric: configureAuth took 746.155987ms
	I0323 23:26:46.724211  427158 ubuntu.go:193] setting minikube options for container-runtime
	I0323 23:26:46.724415  427158 config.go:182] Loaded profile config "force-systemd-env-286741": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:26:46.724478  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:46.793992  427158 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:46.794421  427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I0323 23:26:46.794437  427158 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0323 23:26:46.909667  427158 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0323 23:26:46.909696  427158 ubuntu.go:71] root file system type: overlay
	I0323 23:26:46.909827  427158 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0323 23:26:46.909896  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:46.979665  427158 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:46.980533  427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I0323 23:26:46.980649  427158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0323 23:26:47.134741  427158 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0323 23:26:47.134814  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:47.203471  427158 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:47.203895  427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I0323 23:26:47.203914  427158 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0323 23:26:47.958910  427158 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-23 23:26:47.129506351 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0323 23:26:47.958955  427158 machine.go:91] provisioned docker machine in 2.473006765s
	I0323 23:26:47.958969  427158 client.go:171] LocalClient.Create took 8.678571965s
	I0323 23:26:47.958985  427158 start.go:167] duration metric: libmachine.API.Create for "force-systemd-env-286741" took 8.67862836s
	I0323 23:26:47.959002  427158 start.go:300] post-start starting for "force-systemd-env-286741" (driver="docker")
	I0323 23:26:47.959010  427158 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0323 23:26:47.959086  427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0323 23:26:47.959133  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:48.039006  427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
	I0323 23:26:48.138241  427158 ssh_runner.go:195] Run: cat /etc/os-release
	I0323 23:26:48.141753  427158 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0323 23:26:48.141790  427158 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0323 23:26:48.141804  427158 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0323 23:26:48.141812  427158 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0323 23:26:48.141823  427158 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-62012/.minikube/addons for local assets ...
	I0323 23:26:48.141882  427158 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-62012/.minikube/files for local assets ...
	I0323 23:26:48.141972  427158 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem -> 687022.pem in /etc/ssl/certs
	I0323 23:26:48.141981  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem -> /etc/ssl/certs/687022.pem
	I0323 23:26:48.142083  427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0323 23:26:48.149479  427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem --> /etc/ssl/certs/687022.pem (1708 bytes)
	I0323 23:26:48.170718  427158 start.go:303] post-start completed in 211.698395ms
	I0323 23:26:48.171159  427158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-286741
	I0323 23:26:48.255406  427158 profile.go:148] Saving config to /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/force-systemd-env-286741/config.json ...
	I0323 23:26:48.255709  427158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0323 23:26:48.255768  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:48.348731  427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
	I0323 23:26:48.444848  427158 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0323 23:26:48.454096  427158 start.go:128] duration metric: createHost completed in 9.176760391s
	I0323 23:26:48.454122  427158 start.go:83] releasing machines lock for "force-systemd-env-286741", held for 9.176923746s
	I0323 23:26:48.454203  427158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-286741
	I0323 23:26:48.544171  427158 ssh_runner.go:195] Run: cat /version.json
	I0323 23:26:48.544227  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:48.544232  427158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0323 23:26:48.544306  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:48.702573  427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
	I0323 23:26:48.713344  427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
	I0323 23:26:48.792996  427158 ssh_runner.go:195] Run: systemctl --version
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-03-23 23:25:05 UTC, end at Thu 2023-03-23 23:26:50 UTC. --
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.002500928Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.002674094Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.002709828Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.003286601Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.025479889Z" level=info msg="Loading containers: start."
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.172830226Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.214010134Z" level=info msg="Loading containers: done."
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.225800214Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.225888560Z" level=info msg="Daemon has completed initialization"
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.240113456Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 23 23:25:49 pause-574316 systemd[1]: Started Docker Application Container Engine.
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.246358737Z" level=info msg="API listen on [::]:2376"
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.256115277Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 23 23:26:11 pause-574316 dockerd[5186]: time="2023-03-23T23:26:11.796102440Z" level=info msg="ignoring event" container=6a198df97e4bd33611868552786c34b16a1896b4a18709ad6eaa65e7486b5d20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.145302003Z" level=info msg="ignoring event" container=45416a5cd36b4138409f0bf454eb922e1d3369a86ce1c0c803f7da26778cf7f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.379532489Z" level=info msg="ignoring event" container=60c1dee0f1786db1b413aa688e7a57acd71e6c18979e95b21131d3496a98cad8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.392985764Z" level=info msg="ignoring event" container=840b0c35d4448d1362a7bc020e0fac35331ad72438dfc00e79685e0baca6b11b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.453179245Z" level=info msg="ignoring event" container=656b70fafbc2b7e6611131272fea7433846a18987047e3c8d2e446e8b5290cce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.457378879Z" level=info msg="ignoring event" container=f70a37494730e3c42d183c94cd69472a7f672f61f330f75482164f78d4eda989 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.459285840Z" level=info msg="ignoring event" container=2b7bc2ac835be2dc569bede97afe45c6357e58e4e23f23539dc1433d3a84bedc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.460667173Z" level=info msg="ignoring event" container=d517e8e4d5d2dbd1822c028a0de7f091686d0e0657198f93573dd122ee6485a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.460699514Z" level=info msg="ignoring event" container=4b1c73f39f8c07193f987da6a6d6784c9f87cb43caa7ea5f424e367b0f2e27e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.467741307Z" level=info msg="ignoring event" container=80c388522552702a89135b09d2d073b9c57d1fbc851a0a89b0cec032be049f71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.471167750Z" level=info msg="ignoring event" container=7ff3dcd747a3b0f733eda143cf5993de0d0e1afd3dbd1b2b2f9a8fd3dbea2be9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:25 pause-574316 dockerd[5186]: time="2023-03-23T23:26:25.347736368Z" level=info msg="ignoring event" container=a9b1dc3910d9b5195bfff4b0d6cedbf54b214159654d4e23645c839bf053ad23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	0f0398bddb511       5185b96f0becf       18 seconds ago      Running             coredns                   3                   542477f9c5e1d
	43a8930300a57       92ed2bec97a63       18 seconds ago      Running             kube-proxy                2                   28a061395dad5
	e7cd8ca7c7242       5a79047369329       23 seconds ago      Running             kube-scheduler            3                   4c131416edb23
	f946ab43717f1       ce8c2293ef09c       23 seconds ago      Running             kube-controller-manager   3                   3ca9ec9bef2c4
	1137111a33d08       fce326961ae2d       23 seconds ago      Running             etcd                      3                   f4e9af6f99313
	cea7ca7eb9ad0       1d9b3cbae03ce       28 seconds ago      Running             kube-apiserver            2                   f84cdf335e887
	656b70fafbc2b       fce326961ae2d       39 seconds ago      Exited              etcd                      2                   60c1dee0f1786
	2b7bc2ac835be       5a79047369329       50 seconds ago      Exited              kube-scheduler            2                   4b1c73f39f8c0
	7ff3dcd747a3b       92ed2bec97a63       51 seconds ago      Exited              kube-proxy                1                   d517e8e4d5d2d
	45416a5cd36b4       ce8c2293ef09c       51 seconds ago      Exited              kube-controller-manager   2                   f70a37494730e
	a9b1dc3910d9b       5185b96f0becf       59 seconds ago      Exited              coredns                   2                   840b0c35d4448
	6a198df97e4bd       1d9b3cbae03ce       59 seconds ago      Exited              kube-apiserver            1                   80c3885225527
	
	* 
	* ==> coredns [0f0398bddb51] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:52573 - 39862 "HINFO IN 4074527240347548607.320685648437704123. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.037884079s
	
	* 
	* ==> coredns [a9b1dc3910d9] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:45219 - 2821 "HINFO IN 6139167459808748397.3590652508084774261. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035135004s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-574316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-574316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9478c9159ab3ccef5e7f933edc25c8da75bed69
	                    minikube.k8s.io/name=pause-574316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_23T23_25_21_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Mar 2023 23:25:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-574316
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Mar 2023 23:26:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Mar 2023 23:26:30 +0000   Thu, 23 Mar 2023 23:25:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Mar 2023 23:26:30 +0000   Thu, 23 Mar 2023 23:25:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Mar 2023 23:26:30 +0000   Thu, 23 Mar 2023 23:25:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Mar 2023 23:26:30 +0000   Thu, 23 Mar 2023 23:25:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-574316
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 b249c14bbd9147e887f6315aff00ef06
	  System UUID:                7bdff168-7cdd-493c-bdda-f1cc26739b6e
	  Boot ID:                    9d192f19-d9f5-4df3-a502-4030f2da5343
	  Kernel Version:             5.15.0-1030-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.3
	  Kube-Proxy Version:         v1.26.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-lljqk                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     76s
	  kube-system                 etcd-pause-574316                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         88s
	  kube-system                 kube-apiserver-pause-574316             250m (3%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-pause-574316    200m (2%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-lnk2t                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-574316             100m (1%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientPID     96s (x3 over 96s)  kubelet          Node pause-574316 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    96s (x4 over 96s)  kubelet          Node pause-574316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  96s (x4 over 96s)  kubelet          Node pause-574316 status is now: NodeHasSufficientMemory
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node pause-574316 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node pause-574316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                kubelet          Node pause-574316 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             89s                kubelet          Node pause-574316 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                78s                kubelet          Node pause-574316 status is now: NodeReady
	  Normal  RegisteredNode           77s                node-controller  Node pause-574316 event: Registered Node pause-574316 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-574316 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-574316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-574316 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-574316 event: Registered Node pause-574316 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000619] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da 9a 31 26 91 58 08 06
	[ +46.489619] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 03 7b bf b1 b8 08 06
	[Mar23 23:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 12 06 3d f3 17 47 08 06
	[Mar23 23:21] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e 93 92 d3 0d 7e 08 06
	[  +0.437885] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 93 92 d3 0d 7e 08 06
	[Mar23 23:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 9e 53 5f 42 99 08 06
	[Mar23 23:23] process 'docker/tmp/qemu-check941714971/check' started with executable stack
	[  +9.389883] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e f3 36 2c c1 cd 08 06
	[Mar23 23:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae cb 28 07 13 77 08 06
	[  +0.012995] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 0c 92 4c a9 1c 08 06
	[ +15.547404] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 10 ab 83 31 f9 08 06
	[Mar23 23:26] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 20 81 ad 5c b9 08 06
	[  +5.887427] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 6b a8 e3 05 d7 08 06
	
	* 
	* ==> etcd [1137111a33d0] <==
	* {"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 5"}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 5"}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-574316 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-23T23:26:29.060Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-23T23:26:29.061Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-23T23:26:29.061Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-03-23T23:26:45.146Z","caller":"traceutil/trace.go:171","msg":"trace[1088553463] linearizableReadLoop","detail":"{readStateIndex:500; appliedIndex:499; }","duration":"187.629875ms","start":"2023-03-23T23:26:44.958Z","end":"2023-03-23T23:26:45.146Z","steps":["trace[1088553463] 'read index received'  (duration: 113.126176ms)","trace[1088553463] 'applied index is now lower than readState.Index'  (duration: 74.502878ms)"],"step_count":2}
	{"level":"info","ts":"2023-03-23T23:26:45.146Z","caller":"traceutil/trace.go:171","msg":"trace[1657399943] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"197.637334ms","start":"2023-03-23T23:26:44.948Z","end":"2023-03-23T23:26:45.146Z","steps":["trace[1657399943] 'process raft request'  (duration: 123.099553ms)","trace[1657399943] 'compare'  (duration: 74.347233ms)"],"step_count":2}
	{"level":"warn","ts":"2023-03-23T23:26:45.146Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"187.827176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-574316\" ","response":"range_response_count:1 size:6942"}
	{"level":"info","ts":"2023-03-23T23:26:45.146Z","caller":"traceutil/trace.go:171","msg":"trace[666014890] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-574316; range_end:; response_count:1; response_revision:463; }","duration":"187.950429ms","start":"2023-03-23T23:26:44.958Z","end":"2023-03-23T23:26:45.146Z","steps":["trace[666014890] 'agreement among raft nodes before linearized reading'  (duration: 187.770048ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-23T23:26:45.429Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"133.41564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:764"}
	{"level":"info","ts":"2023-03-23T23:26:45.429Z","caller":"traceutil/trace.go:171","msg":"trace[1689761979] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:463; }","duration":"133.510104ms","start":"2023-03-23T23:26:45.295Z","end":"2023-03-23T23:26:45.429Z","steps":["trace[1689761979] 'range keys from in-memory index tree'  (duration: 133.250873ms)"],"step_count":1}
	
	* 
	* ==> etcd [656b70fafbc2] <==
	* {"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-574316 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-23T23:26:14.577Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-03-23T23:26:14.577Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-23T23:26:20.377Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-03-23T23:26:20.377Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-574316","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"info","ts":"2023-03-23T23:26:20.380Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-03-23T23:26:20.382Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-23T23:26:20.384Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-23T23:26:20.384Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-574316","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  23:26:50 up  2:09,  0 users,  load average: 5.27, 4.14, 2.82
	Linux pause-574316 5.15.0-1030-gcp #37~20.04.1-Ubuntu SMP Mon Feb 20 04:30:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [6a198df97e4b] <==
	* W0323 23:26:08.603014       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0323 23:26:09.405661       1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0323 23:26:09.657900       1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0323 23:26:11.774251       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-apiserver [cea7ca7eb9ad] <==
	* I0323 23:26:30.648351       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0323 23:26:30.648430       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0323 23:26:30.684300       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0323 23:26:30.639853       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0323 23:26:30.639867       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0323 23:26:30.639933       1 autoregister_controller.go:141] Starting autoregister controller
	I0323 23:26:30.690081       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0323 23:26:30.690161       1 cache.go:39] Caches are synced for autoregister controller
	I0323 23:26:30.701389       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0323 23:26:30.750507       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0323 23:26:30.750975       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0323 23:26:30.752373       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0323 23:26:30.752385       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0323 23:26:30.752497       1 shared_informer.go:280] Caches are synced for configmaps
	I0323 23:26:30.753570       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0323 23:26:30.753615       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0323 23:26:31.339987       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0323 23:26:31.646840       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0323 23:26:32.375391       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0323 23:26:32.388141       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0323 23:26:32.474747       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0323 23:26:32.557448       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0323 23:26:32.566478       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0323 23:26:43.845098       1 controller.go:615] quota admission added evaluator for: endpoints
	I0323 23:26:43.899216       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [45416a5cd36b] <==
	* I0323 23:25:59.829591       1 serving.go:348] Generated self-signed cert in-memory
	I0323 23:26:00.084118       1 controllermanager.go:182] Version: v1.26.3
	I0323 23:26:00.084152       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0323 23:26:00.085310       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0323 23:26:00.085306       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0323 23:26:00.085554       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0323 23:26:00.085646       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	F0323 23:26:20.087377       1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	
	* 
	* ==> kube-controller-manager [f946ab43717f] <==
	* I0323 23:26:43.682858       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0323 23:26:43.685481       1 shared_informer.go:280] Caches are synced for GC
	I0323 23:26:43.691799       1 shared_informer.go:280] Caches are synced for HPA
	I0323 23:26:43.691846       1 shared_informer.go:280] Caches are synced for daemon sets
	I0323 23:26:43.691921       1 shared_informer.go:280] Caches are synced for PVC protection
	I0323 23:26:43.691962       1 shared_informer.go:280] Caches are synced for endpoint
	I0323 23:26:43.692814       1 shared_informer.go:280] Caches are synced for ephemeral
	I0323 23:26:43.692841       1 shared_informer.go:280] Caches are synced for cronjob
	I0323 23:26:43.692907       1 shared_informer.go:280] Caches are synced for service account
	I0323 23:26:43.696646       1 shared_informer.go:280] Caches are synced for taint
	I0323 23:26:43.696746       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	I0323 23:26:43.696779       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	W0323 23:26:43.696843       1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-574316. Assuming now as a timestamp.
	I0323 23:26:43.696884       1 taint_manager.go:211] "Sending events to api server"
	I0323 23:26:43.696913       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0323 23:26:43.697076       1 event.go:294] "Event occurred" object="pause-574316" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-574316 event: Registered Node pause-574316 in Controller"
	I0323 23:26:43.698625       1 shared_informer.go:280] Caches are synced for crt configmap
	I0323 23:26:43.701545       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0323 23:26:43.740889       1 shared_informer.go:280] Caches are synced for attach detach
	I0323 23:26:43.792552       1 shared_informer.go:280] Caches are synced for disruption
	I0323 23:26:43.821372       1 shared_informer.go:280] Caches are synced for resource quota
	I0323 23:26:43.894489       1 shared_informer.go:280] Caches are synced for resource quota
	I0323 23:26:44.210014       1 shared_informer.go:280] Caches are synced for garbage collector
	I0323 23:26:44.229157       1 shared_informer.go:280] Caches are synced for garbage collector
	I0323 23:26:44.229247       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [43a8930300a5] <==
	* I0323 23:26:32.502821       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0323 23:26:32.502919       1 server_others.go:109] "Detected node IP" address="192.168.67.2"
	I0323 23:26:32.503040       1 server_others.go:535] "Using iptables proxy"
	I0323 23:26:32.581352       1 server_others.go:176] "Using iptables Proxier"
	I0323 23:26:32.581492       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0323 23:26:32.581507       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0323 23:26:32.581525       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0323 23:26:32.581580       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0323 23:26:32.582126       1 server.go:655] "Version info" version="v1.26.3"
	I0323 23:26:32.582166       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0323 23:26:32.582886       1 config.go:226] "Starting endpoint slice config controller"
	I0323 23:26:32.583504       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0323 23:26:32.583082       1 config.go:317] "Starting service config controller"
	I0323 23:26:32.583523       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0323 23:26:32.583137       1 config.go:444] "Starting node config controller"
	I0323 23:26:32.583545       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0323 23:26:32.684533       1 shared_informer.go:280] Caches are synced for service config
	I0323 23:26:32.684613       1 shared_informer.go:280] Caches are synced for node config
	I0323 23:26:32.684623       1 shared_informer.go:280] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [7ff3dcd747a3] <==
	* E0323 23:26:09.977748       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": net/http: TLS handshake timeout
	E0323 23:26:12.783360       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": dial tcp 192.168.67.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.67.2:39882->192.168.67.2:8443: read: connection reset by peer
	E0323 23:26:14.853949       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:18.965897       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": dial tcp 192.168.67.2:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [2b7bc2ac835b] <==
	* W0323 23:26:16.679162       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:16.679200       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:16.812219       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:16.812268       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:16.846940       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:16.846981       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:17.007369       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:17.007406       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:19.575702       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:19.575741       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:19.775890       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:19.775937       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:19.850977       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:19.851021       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:20.060721       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:20.060762       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:20.080470       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:20.080525       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:20.208535       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:20.208595       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	I0323 23:26:20.353988       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0323 23:26:20.354103       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0323 23:26:20.354167       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0323 23:26:20.354182       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0323 23:26:20.354209       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [e7cd8ca7c724] <==
	* I0323 23:26:28.403386       1 serving.go:348] Generated self-signed cert in-memory
	I0323 23:26:30.771476       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.3"
	I0323 23:26:30.771503       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0323 23:26:30.778353       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0323 23:26:30.778381       1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0323 23:26:30.778428       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0323 23:26:30.778441       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0323 23:26:30.778478       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0323 23:26:30.778489       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0323 23:26:30.779761       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0323 23:26:30.784753       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0323 23:26:30.878975       1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
	I0323 23:26:30.879041       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0323 23:26:30.878980       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-03-23 23:25:05 UTC, end at Thu 2023-03-23 23:26:50 UTC. --
	Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.503080    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16bcc950c7983e1395e2f1091ca3b040-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-574316\" (UID: \"16bcc950c7983e1395e2f1091ca3b040\") " pod="kube-system/kube-controller-manager-pause-574316"
	Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.748833    7640 scope.go:115] "RemoveContainer" containerID="656b70fafbc2b7e6611131272fea7433846a18987047e3c8d2e446e8b5290cce"
	Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.763712    7640 scope.go:115] "RemoveContainer" containerID="45416a5cd36b4138409f0bf454eb922e1d3369a86ce1c0c803f7da26778cf7f4"
	Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.773578    7640 scope.go:115] "RemoveContainer" containerID="2b7bc2ac835be2dc569bede97afe45c6357e58e4e23f23539dc1433d3a84bedc"
	Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.818789    7640 kubelet_node_status.go:108] "Node was previously registered" node="pause-574316"
	Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.819442    7640 kubelet_node_status.go:73] "Successfully registered node" node="pause-574316"
	Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.821124    7640 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.827327    7640 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.062727    7640 apiserver.go:52] "Watching apiserver"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.069251    7640 topology_manager.go:210] "Topology Admit Handler"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.069369    7640 topology_manager.go:210] "Topology Admit Handler"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.069450    7640 topology_manager.go:210] "Topology Admit Handler"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.098738    7640 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.160848    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxzp5\" (UniqueName: \"kubernetes.io/projected/aeba9090-2690-42e1-8439-a0cd55ada6d0-kube-api-access-kxzp5\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.160919    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wm5m\" (UniqueName: \"kubernetes.io/projected/ce593e1c-39de-4a21-994e-157f74ab568e-kube-api-access-8wm5m\") pod \"coredns-787d4945fb-lljqk\" (UID: \"ce593e1c-39de-4a21-994e-157f74ab568e\") " pod="kube-system/coredns-787d4945fb-lljqk"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.160966    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeba9090-2690-42e1-8439-a0cd55ada6d0-lib-modules\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161002    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce593e1c-39de-4a21-994e-157f74ab568e-config-volume\") pod \"coredns-787d4945fb-lljqk\" (UID: \"ce593e1c-39de-4a21-994e-157f74ab568e\") " pod="kube-system/coredns-787d4945fb-lljqk"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161027    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aeba9090-2690-42e1-8439-a0cd55ada6d0-kube-proxy\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161059    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeba9090-2690-42e1-8439-a0cd55ada6d0-xtables-lock\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161088    7640 reconciler.go:41] "Reconciler: start to sync state"
	Mar 23 23:26:32 pause-574316 kubelet[7640]: I0323 23:26:32.271414    7640 scope.go:115] "RemoveContainer" containerID="7ff3dcd747a3b0f733eda143cf5993de0d0e1afd3dbd1b2b2f9a8fd3dbea2be9"
	Mar 23 23:26:32 pause-574316 kubelet[7640]: I0323 23:26:32.700707    7640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="542477f9c5e1de564352e093d277e29ea04f9ada02cdebe4924d534ea2be3623"
	Mar 23 23:26:34 pause-574316 kubelet[7640]: I0323 23:26:34.734860    7640 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Mar 23 23:26:35 pause-574316 kubelet[7640]: I0323 23:26:35.343216    7640 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=05fc3b9f-534f-4c25-ab9a-0f1ea4cb9014 path="/var/lib/kubelet/pods/05fc3b9f-534f-4c25-ab9a-0f1ea4cb9014/volumes"
	Mar 23 23:26:37 pause-574316 kubelet[7640]: I0323 23:26:37.006845    7640 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-574316 -n pause-574316
helpers_test.go:261: (dbg) Run:  kubectl --context pause-574316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-574316
helpers_test.go:235: (dbg) docker inspect pause-574316:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb",
	        "Created": "2023-03-23T23:25:04.583396388Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 390898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-23T23:25:05.007909282Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/hosts",
	        "LogPath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb-json.log",
	        "Name": "/pause-574316",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-574316:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-574316",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47-init/diff:/var/lib/docker/overlay2/d356d443959743e8c5ec1e688b0ccaccd2483fd24991ca327095d1ea51dadd79/diff:/var/lib/docker/overlay2/dd1855d68604dc5432757610d41f6488e2cf65b7ade63d0ac4dd50e3cb700545/diff:/var/lib/docker/overlay2/3ae5a9ac34ca4f4036f376d3f7ee1e6d806107b6ba140eee2af2df3088fe2af4/diff:/var/lib/docker/overlay2/a88a7a03b1dddb065d2da925165770d1982de0fb6388d7798dec4a6c996388ed/diff:/var/lib/docker/overlay2/11e0cdbbdfb5d84e0d99a3d4a7693f825097d37baa31784b182606407b254347/diff:/var/lib/docker/overlay2/f3679d076f087c60feb261250bae0ef050d7ed7a8876697b61f4e74260ac5c25/diff:/var/lib/docker/overlay2/3a9213ab7d98194272e65090b79370f92e0fed3b68466ca89c2fce6cc06bee37/diff:/var/lib/docker/overlay2/c7e7b51e4ed37e163c31a7a2769a396f00a3a46bbe043bb3d74144e3d7dbdf4b/diff:/var/lib/docker/overlay2/a5a37da3c24f5ba9b69245b491d59fa7f875d4bf22ab2d3b4fe2e0480245836e/diff:/var/lib/docker/overlay2/f36025
f30104b76500045a0755939ab273914eecce2e91f0541c32de5325546f/diff:/var/lib/docker/overlay2/ef9ccd83ee71ed9d46782a820551dbda8865609796f631a741766fab9be9c04b/diff:/var/lib/docker/overlay2/e105b68b5b16f55e25547056d8ce228bdac36d93107fd4a3a78c8b026fbe0140/diff:/var/lib/docker/overlay2/75ca52704ffd583bb6fbed231278a5c352311cb4dee88f8b731377a47cdf43cd/diff:/var/lib/docker/overlay2/70a153c20f330aaea42285756d01aeb9a3e45e8909ea0b266c7d189438588e4b/diff:/var/lib/docker/overlay2/e07683b025df1da95650fadc2612b6df0024b6d4ab531cf439bb426bb94dd7c6/diff:/var/lib/docker/overlay2/a9c09db98b0de89a8bd85bb42c47585ec8dd924dfea9913e0e1e581771cb76db/diff:/var/lib/docker/overlay2/467577b0b0b8cb64beff8ef36e7da084fb7cddcdea88ced35ada883720038870/diff:/var/lib/docker/overlay2/89ecada524594426b58db802e9a64eff841e5a0dda6609f65ba80c77dc71866e/diff:/var/lib/docker/overlay2/d2e226af46510168fcd51d532ca7a03e77c9d9eb5253b85afd78b26e7b839180/diff:/var/lib/docker/overlay2/e7c1552e27888c5d4d72be70f7b4614ac96872e390e99ad721f043fa28cdc212/diff:/var/lib/d
ocker/overlay2/3074211fc4276144c82302477aac25cc2363357462b8212747bf9a6abdb179b8/diff:/var/lib/docker/overlay2/2f0eed0a121e12185ea49a07f0a026b7cd3add1c64e943d8f00609db9cb06035/diff:/var/lib/docker/overlay2/efa9237fe1d3ed78c6d7939b6d7a46778b6c3851395039e00da7e7ba1c07743d/diff:/var/lib/docker/overlay2/0ca055233446f0ea58f8b702a09b991f77ae9c6f1a338762761848f3a4b12d4e/diff:/var/lib/docker/overlay2/aa7036e406ea8fcd3317c56097ff3b2227796276b2a8ab2f3f7103fed4dfa3b5/diff:/var/lib/docker/overlay2/2f3123bc47bc73bed1b1f7f75675e13e493ca4c8e4f5c4cb662aae58d9373cca/diff:/var/lib/docker/overlay2/1275037c371fbe052f7ca3e9c640764633c72ba9f3d6954b012d34cae8b5d69d/diff:/var/lib/docker/overlay2/7b9c1ddebbcba2b26d07bd7fba9c0fd87ce195be38c2a75f219ac7de57f85b3f/diff:/var/lib/docker/overlay2/2b39bb0f285174bfa621ed101af05ba3552825ab700a73135af1e8b8d7f0bb81/diff:/var/lib/docker/overlay2/643ab8ec872c6defa175401a06dd4a300105c4061619e41059a39a3ee35e3d40/diff:/var/lib/docker/overlay2/713ee57325a771a6a041c255726b832978f929eb1147c72212d96dd7dde
734b2/diff:/var/lib/docker/overlay2/19c1f1f71db682b75e904ad1c7d909f372d24486542012874e578917dc9a9bdf/diff:/var/lib/docker/overlay2/d26fed6403eddd78cf74be1d4a1f4012e1edccb465491f947e4746d92cebcd56/diff:/var/lib/docker/overlay2/0086cdc0bd9c0e4bd086d59a3944cac9d08674d00c80fa77d1f9faa935a5fb19/diff:/var/lib/docker/overlay2/9e14b9f084a1ea7826ee394f169e32a19b56fa135bde5da69486094355c778bb/diff:/var/lib/docker/overlay2/92af9bb2d1b59e9a45cd00af02a78ed7edab34388b268ad30cf749708e273ee8/diff:/var/lib/docker/overlay2/b13dcd677cb58d34d216059052299c900b1728fe3d46ae29cdf0f9a6991696ac/diff:/var/lib/docker/overlay2/30ba19dfbdf89b50aa26fe1695664407f059e1a354830d1d0363128794c81c8f/diff:/var/lib/docker/overlay2/0a91cb0450bc46b302d1b3518574e94a65ab366928b7b67d4dd446e682a14338/diff:/var/lib/docker/overlay2/0b3c4aae10bf80ea7c918fa052ad5ed468c2ebe01aa2f0658bc20304d1f6b07e/diff:/var/lib/docker/overlay2/9602ed727f176a29d28ed2d2045ad3c93f4ec63578399744c69db3d3057f1ed7/diff:/var/lib/docker/overlay2/33399f037b75aa41b061c2f9330cd6f041c290
9051f6ad5b09141a0346202db9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-574316",
	                "Source": "/var/lib/docker/volumes/pause-574316/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-574316",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-574316",
	                "name.minikube.sigs.k8s.io": "pause-574316",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "83727ed535e639dbb7b60a28c289ec43475eb83a2bfc731da6a7d8b3710be5ba",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32989"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32988"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32985"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32987"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32986"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/83727ed535e6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-574316": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "973cf0ca8459",
	                        "pause-574316"
	                    ],
	                    "NetworkID": "2400bfbdd9cf00f3450521e73ae0be02c2bb9e5678c8bce35f9e0dc4ced8fa23",
	                    "EndpointID": "1af4d5eb5080f4897840d3dd79c7fcfc8ac3d8dcb7665dd57389ff515a84a05e",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-574316 -n pause-574316
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-574316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-574316 logs -n 25: (1.280032379s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl cat kubelet                                |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl status docker --all                        |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl cat docker                                 |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /etc/docker/daemon.json                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo docker                         | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | system info                                          |                          |         |         |                     |                     |
	| start   | -p force-systemd-env-286741                          | force-systemd-env-286741 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | --memory=2048                                        |                          |         |         |                     |                     |
	|         | --alsologtostderr                                    |                          |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                          |         |         |                     |                     |
	|         | --container-runtime=docker                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl status cri-docker                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | cri-dockerd --version                                |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl status containerd                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl cat containerd                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo cat                            | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | containerd config dump                               |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl status crio --all                          |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo                                | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo find                           | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-452361 sudo crio                           | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | config                                               |                          |         |         |                     |                     |
	| delete  | -p cilium-452361                                     | cilium-452361            | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | 23 Mar 23 23:26 UTC |
	| start   | -p old-k8s-version-063647                            | old-k8s-version-063647   | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC |                     |
	|         | --memory=2200                                        |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                          |         |         |                     |                     |
	|         | --kvm-network=default                                |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                          |         |         |                     |                     |
	|         | --keep-context=false                                 |                          |         |         |                     |                     |
	|         | --driver=docker                                      |                          |         |         |                     |                     |
	|         | --container-runtime=docker                           |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                          |         |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/23 23:26:40
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0323 23:26:40.042149  428061 out.go:296] Setting OutFile to fd 1 ...
	I0323 23:26:40.042248  428061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:26:40.042257  428061 out.go:309] Setting ErrFile to fd 2...
	I0323 23:26:40.042261  428061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:26:40.042366  428061 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
	I0323 23:26:40.042954  428061 out.go:303] Setting JSON to false
	I0323 23:26:40.047193  428061 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7746,"bootTime":1679606254,"procs":1211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0323 23:26:40.047254  428061 start.go:135] virtualization: kvm guest
	I0323 23:26:40.049796  428061 out.go:177] * [old-k8s-version-063647] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0323 23:26:40.051284  428061 out.go:177]   - MINIKUBE_LOCATION=16143
	I0323 23:26:40.051309  428061 notify.go:220] Checking for updates...
	I0323 23:26:40.052905  428061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0323 23:26:40.054785  428061 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	I0323 23:26:40.056430  428061 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	I0323 23:26:40.058083  428061 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0323 23:26:40.059646  428061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0323 23:26:40.061783  428061 config.go:182] Loaded profile config "force-systemd-env-286741": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:26:40.061882  428061 config.go:182] Loaded profile config "kubernetes-upgrade-120624": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-beta.0
	I0323 23:26:40.062033  428061 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:26:40.062098  428061 driver.go:365] Setting default libvirt URI to qemu:///system
	I0323 23:26:40.147368  428061 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0323 23:26:40.147472  428061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0323 23:26:40.295961  428061 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2023-03-23 23:26:40.275708441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0323 23:26:40.296057  428061 docker.go:294] overlay module found
	I0323 23:26:40.298752  428061 out.go:177] * Using the docker driver based on user configuration
	I0323 23:26:40.300448  428061 start.go:295] selected driver: docker
	I0323 23:26:40.300468  428061 start.go:856] validating driver "docker" against <nil>
	I0323 23:26:40.300482  428061 start.go:867] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0323 23:26:40.301339  428061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0323 23:26:40.438182  428061 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:42 SystemTime:2023-03-23 23:26:40.428586758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0323 23:26:40.438301  428061 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0323 23:26:40.438509  428061 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0323 23:26:40.441248  428061 out.go:177] * Using Docker driver with root privileges
	I0323 23:26:40.442932  428061 cni.go:84] Creating CNI manager for ""
	I0323 23:26:40.442974  428061 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0323 23:26:40.442984  428061 start_flags.go:319] config:
	{Name:old-k8s-version-063647 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-063647 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0323 23:26:40.444845  428061 out.go:177] * Starting control plane node old-k8s-version-063647 in cluster old-k8s-version-063647
	I0323 23:26:40.446536  428061 cache.go:120] Beginning downloading kic base image for docker with docker
	I0323 23:26:40.448053  428061 out.go:177] * Pulling base image ...
	I0323 23:26:40.449652  428061 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0323 23:26:40.449683  428061 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0323 23:26:40.449703  428061 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0323 23:26:40.449720  428061 cache.go:57] Caching tarball of preloaded images
	I0323 23:26:40.449803  428061 preload.go:174] Found /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0323 23:26:40.449814  428061 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0323 23:26:40.449923  428061 profile.go:148] Saving config to /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/config.json ...
	I0323 23:26:40.449948  428061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/config.json: {Name:mkd269866aecb4e0ebd7c80fae44792dc2e78f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0323 23:26:40.540045  428061 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0323 23:26:40.540081  428061 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0323 23:26:40.540105  428061 cache.go:193] Successfully downloaded all kic artifacts
	I0323 23:26:40.540144  428061 start.go:364] acquiring machines lock for old-k8s-version-063647: {Name:mk836ec8f4a8439e66a7c2c2dcb6074efc06d654 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0323 23:26:40.540267  428061 start.go:368] acquired machines lock for "old-k8s-version-063647" in 98.708µs
	I0323 23:26:40.540298  428061 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-063647 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-063647 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0323 23:26:40.540420  428061 start.go:125] createHost starting for "" (driver="docker")
	I0323 23:26:37.666420  360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0323 23:26:37.666756  360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0323 23:26:37.915164  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0323 23:26:37.934415  360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
	I0323 23:26:37.934495  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0323 23:26:37.954816  360910 logs.go:277] 1 containers: [a90d829451b2]
	I0323 23:26:37.954881  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0323 23:26:37.973222  360910 logs.go:277] 0 containers: []
	W0323 23:26:37.973245  360910 logs.go:279] No container was found matching "coredns"
	I0323 23:26:37.973298  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0323 23:26:37.992640  360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
	I0323 23:26:37.992731  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0323 23:26:38.012097  360910 logs.go:277] 1 containers: [333ad261cea4]
	I0323 23:26:38.012179  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0323 23:26:38.030328  360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
	I0323 23:26:38.030409  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0323 23:26:38.048993  360910 logs.go:277] 0 containers: []
	W0323 23:26:38.049024  360910 logs.go:279] No container was found matching "kindnet"
	I0323 23:26:38.049080  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0323 23:26:38.068667  360910 logs.go:277] 1 containers: [eac6b13c2df0]
	I0323 23:26:38.068707  360910 logs.go:123] Gathering logs for describe nodes ...
	I0323 23:26:38.068722  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0323 23:26:38.127007  360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0323 23:26:38.127040  360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
	I0323 23:26:38.127056  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
	I0323 23:26:38.147666  360910 logs.go:123] Gathering logs for dmesg ...
	I0323 23:26:38.147691  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0323 23:26:38.168212  360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
	I0323 23:26:38.168249  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
	I0323 23:26:38.197795  360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
	I0323 23:26:38.197836  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
	I0323 23:26:38.243949  360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
	I0323 23:26:38.243989  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
	I0323 23:26:38.264103  360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
	I0323 23:26:38.264130  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
	I0323 23:26:38.288660  360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
	I0323 23:26:38.288696  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
	I0323 23:26:38.363370  360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
	I0323 23:26:38.363403  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
	I0323 23:26:38.386060  360910 logs.go:123] Gathering logs for container status ...
	I0323 23:26:38.386089  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0323 23:26:38.418791  360910 logs.go:123] Gathering logs for kubelet ...
	I0323 23:26:38.418815  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0323 23:26:38.548713  360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
	I0323 23:26:38.548764  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
	I0323 23:26:38.579492  360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
	I0323 23:26:38.579537  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
	I0323 23:26:38.618692  360910 logs.go:123] Gathering logs for Docker ...
	I0323 23:26:38.618721  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0323 23:26:41.155209  360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0323 23:26:41.155664  360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0323 23:26:41.415055  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0323 23:26:41.434873  360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
	I0323 23:26:41.434945  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0323 23:26:41.455006  360910 logs.go:277] 1 containers: [a90d829451b2]
	I0323 23:26:41.455077  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0323 23:26:41.472882  360910 logs.go:277] 0 containers: []
	W0323 23:26:41.472906  360910 logs.go:279] No container was found matching "coredns"
	I0323 23:26:41.472950  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0323 23:26:41.491292  360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
	I0323 23:26:41.491390  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0323 23:26:39.446424  401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
	I0323 23:26:41.447016  401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
	I0323 23:26:39.280123  427158 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0323 23:26:39.280357  427158 start.go:159] libmachine.API.Create for "force-systemd-env-286741" (driver="docker")
	I0323 23:26:39.280387  427158 client.go:168] LocalClient.Create starting
	I0323 23:26:39.280458  427158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem
	I0323 23:26:39.280507  427158 main.go:141] libmachine: Decoding PEM data...
	I0323 23:26:39.280530  427158 main.go:141] libmachine: Parsing certificate...
	I0323 23:26:39.280594  427158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem
	I0323 23:26:39.280623  427158 main.go:141] libmachine: Decoding PEM data...
	I0323 23:26:39.280640  427158 main.go:141] libmachine: Parsing certificate...
	I0323 23:26:39.280974  427158 cli_runner.go:164] Run: docker network inspect force-systemd-env-286741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0323 23:26:39.354615  427158 cli_runner.go:211] docker network inspect force-systemd-env-286741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0323 23:26:39.354704  427158 network_create.go:281] running [docker network inspect force-systemd-env-286741] to gather additional debugging logs...
	I0323 23:26:39.354728  427158 cli_runner.go:164] Run: docker network inspect force-systemd-env-286741
	W0323 23:26:39.425557  427158 cli_runner.go:211] docker network inspect force-systemd-env-286741 returned with exit code 1
	I0323 23:26:39.425596  427158 network_create.go:284] error running [docker network inspect force-systemd-env-286741]: docker network inspect force-systemd-env-286741: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-286741 not found
	I0323 23:26:39.425628  427158 network_create.go:286] output of [docker network inspect force-systemd-env-286741]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-286741 not found
	
	** /stderr **
	I0323 23:26:39.425680  427158 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0323 23:26:39.503698  427158 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5c8e73f5a026 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0a:b3:fe:c5} reservation:<nil>}
	I0323 23:26:39.504676  427158 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76643bda3762 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f7:a2:b3:ec} reservation:<nil>}
	I0323 23:26:39.505710  427158 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2400bfbdd9cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a7:a4:76:86} reservation:<nil>}
	I0323 23:26:39.506685  427158 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cd4e78a8bfb8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:a6:13:91:cb} reservation:<nil>}
	I0323 23:26:39.507885  427158 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00175a3d0}
	I0323 23:26:39.507923  427158 network_create.go:123] attempt to create docker network force-systemd-env-286741 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0323 23:26:39.507984  427158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-286741 force-systemd-env-286741
	I0323 23:26:39.624494  427158 network_create.go:107] docker network force-systemd-env-286741 192.168.85.0/24 created
	I0323 23:26:39.624528  427158 kic.go:117] calculated static IP "192.168.85.2" for the "force-systemd-env-286741" container
	I0323 23:26:39.624580  427158 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0323 23:26:39.699198  427158 cli_runner.go:164] Run: docker volume create force-systemd-env-286741 --label name.minikube.sigs.k8s.io=force-systemd-env-286741 --label created_by.minikube.sigs.k8s.io=true
	I0323 23:26:39.772552  427158 oci.go:103] Successfully created a docker volume force-systemd-env-286741
	I0323 23:26:39.772640  427158 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-286741-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-286741 --entrypoint /usr/bin/test -v force-systemd-env-286741:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
	I0323 23:26:40.396101  427158 oci.go:107] Successfully prepared a docker volume force-systemd-env-286741
	I0323 23:26:40.396169  427158 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0323 23:26:40.396201  427158 kic.go:190] Starting extracting preloaded images to volume ...
	I0323 23:26:40.396283  427158 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-286741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
	I0323 23:26:43.652059  427158 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-286741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (3.255698579s)
	I0323 23:26:43.652098  427158 kic.go:199] duration metric: took 3.255892 seconds to extract preloaded images to volume
	W0323 23:26:43.652249  427158 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0323 23:26:43.652340  427158 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0323 23:26:43.788292  427158 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-286741 --name force-systemd-env-286741 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-286741 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-286741 --network force-systemd-env-286741 --ip 192.168.85.2 --volume force-systemd-env-286741:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
	I0323 23:26:40.542931  428061 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0323 23:26:40.543143  428061 start.go:159] libmachine.API.Create for "old-k8s-version-063647" (driver="docker")
	I0323 23:26:40.543161  428061 client.go:168] LocalClient.Create starting
	I0323 23:26:40.543233  428061 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem
	I0323 23:26:40.543267  428061 main.go:141] libmachine: Decoding PEM data...
	I0323 23:26:40.543291  428061 main.go:141] libmachine: Parsing certificate...
	I0323 23:26:40.543363  428061 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem
	I0323 23:26:40.543394  428061 main.go:141] libmachine: Decoding PEM data...
	I0323 23:26:40.543409  428061 main.go:141] libmachine: Parsing certificate...
	I0323 23:26:40.543830  428061 cli_runner.go:164] Run: docker network inspect old-k8s-version-063647 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0323 23:26:40.622688  428061 cli_runner.go:211] docker network inspect old-k8s-version-063647 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0323 23:26:40.622796  428061 network_create.go:281] running [docker network inspect old-k8s-version-063647] to gather additional debugging logs...
	I0323 23:26:40.622825  428061 cli_runner.go:164] Run: docker network inspect old-k8s-version-063647
	W0323 23:26:40.691850  428061 cli_runner.go:211] docker network inspect old-k8s-version-063647 returned with exit code 1
	I0323 23:26:40.691881  428061 network_create.go:284] error running [docker network inspect old-k8s-version-063647]: docker network inspect old-k8s-version-063647: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-063647 not found
	I0323 23:26:40.691895  428061 network_create.go:286] output of [docker network inspect old-k8s-version-063647]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-063647 not found
	
	** /stderr **
	I0323 23:26:40.691971  428061 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0323 23:26:40.769117  428061 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5c8e73f5a026 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0a:b3:fe:c5} reservation:<nil>}
	I0323 23:26:40.769965  428061 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76643bda3762 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f7:a2:b3:ec} reservation:<nil>}
	I0323 23:26:40.770928  428061 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2400bfbdd9cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a7:a4:76:86} reservation:<nil>}
	I0323 23:26:40.771945  428061 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cd4e78a8bfb8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:a6:13:91:cb} reservation:<nil>}
	I0323 23:26:40.773155  428061 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f79741dc633b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:e0:82:cf:7a} reservation:<nil>}
	I0323 23:26:40.774473  428061 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014e36b0}
	I0323 23:26:40.774511  428061 network_create.go:123] attempt to create docker network old-k8s-version-063647 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0323 23:26:40.774584  428061 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-063647 old-k8s-version-063647
	I0323 23:26:40.898151  428061 network_create.go:107] docker network old-k8s-version-063647 192.168.94.0/24 created
	I0323 23:26:40.898189  428061 kic.go:117] calculated static IP "192.168.94.2" for the "old-k8s-version-063647" container
	I0323 23:26:40.898268  428061 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0323 23:26:40.974566  428061 cli_runner.go:164] Run: docker volume create old-k8s-version-063647 --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --label created_by.minikube.sigs.k8s.io=true
	I0323 23:26:41.045122  428061 oci.go:103] Successfully created a docker volume old-k8s-version-063647
	I0323 23:26:41.045212  428061 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-063647-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --entrypoint /usr/bin/test -v old-k8s-version-063647:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
	I0323 23:26:44.069733  428061 cli_runner.go:217] Completed: docker run --rm --name old-k8s-version-063647-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --entrypoint /usr/bin/test -v old-k8s-version-063647:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib: (3.024480313s)
	I0323 23:26:44.069768  428061 oci.go:107] Successfully prepared a docker volume old-k8s-version-063647
	I0323 23:26:44.069781  428061 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0323 23:26:44.069803  428061 kic.go:190] Starting extracting preloaded images to volume ...
	I0323 23:26:44.069874  428061 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-063647:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
	I0323 23:26:43.946954  401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
	I0323 23:26:44.447057  401618 pod_ready.go:92] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:44.447087  401618 pod_ready.go:81] duration metric: took 7.011439342s waiting for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:44.447102  401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:44.452104  401618 pod_ready.go:92] pod "kube-apiserver-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:44.452122  401618 pod_ready.go:81] duration metric: took 5.012337ms waiting for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:44.452131  401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.154244  401618 pod_ready.go:92] pod "kube-controller-manager-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:45.154286  401618 pod_ready.go:81] duration metric: took 702.146362ms waiting for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.154300  401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.161861  401618 pod_ready.go:92] pod "kube-proxy-lnk2t" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:45.161889  401618 pod_ready.go:81] duration metric: took 7.580234ms waiting for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.161903  401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.166566  401618 pod_ready.go:92] pod "kube-scheduler-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:45.166596  401618 pod_ready.go:81] duration metric: took 4.684396ms waiting for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.166605  401618 pod_ready.go:38] duration metric: took 12.254811598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0323 23:26:45.166630  401618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0323 23:26:45.174654  401618 ops.go:34] apiserver oom_adj: -16
	I0323 23:26:45.174677  401618 kubeadm.go:637] restartCluster took 54.651125652s
	I0323 23:26:45.174685  401618 kubeadm.go:403] StartCluster complete in 54.678873105s
	I0323 23:26:45.174705  401618 settings.go:142] acquiring lock: {Name:mk2143e7b36672d551bcc6ff6483f31f704df2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0323 23:26:45.174775  401618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16143-62012/kubeconfig
	I0323 23:26:45.175905  401618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-62012/kubeconfig: {Name:mkedf19780b2d3cba14a58c9ca6a4f1d32104ee0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0323 23:26:45.213579  401618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0323 23:26:45.213933  401618 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:26:45.213472  401618 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0323 23:26:45.214148  401618 kapi.go:59] client config for pause-574316: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.key", CAFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0323 23:26:45.414715  401618 out.go:177] * Enabled addons: 
	I0323 23:26:45.217242  401618 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-574316" context rescaled to 1 replicas
	I0323 23:26:45.430053  401618 addons.go:499] enable addons completed in 216.595091ms: enabled=[]
	I0323 23:26:45.430069  401618 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0323 23:26:45.436198  401618 out.go:177] * Verifying Kubernetes components...
	I0323 23:26:41.512784  360910 logs.go:277] 1 containers: [333ad261cea4]
	I0323 23:26:41.580770  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0323 23:26:41.604486  360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
	I0323 23:26:41.604573  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0323 23:26:41.623789  360910 logs.go:277] 0 containers: []
	W0323 23:26:41.623821  360910 logs.go:279] No container was found matching "kindnet"
	I0323 23:26:41.623896  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0323 23:26:41.644226  360910 logs.go:277] 1 containers: [eac6b13c2df0]
	I0323 23:26:41.644272  360910 logs.go:123] Gathering logs for kubelet ...
	I0323 23:26:41.644288  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0323 23:26:41.748676  360910 logs.go:123] Gathering logs for dmesg ...
	I0323 23:26:41.748714  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0323 23:26:41.768332  360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
	I0323 23:26:41.768367  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
	I0323 23:26:41.792311  360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
	I0323 23:26:41.792341  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
	I0323 23:26:41.830521  360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
	I0323 23:26:41.830556  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
	I0323 23:26:41.860609  360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
	I0323 23:26:41.860650  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
	I0323 23:26:41.932251  360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
	I0323 23:26:41.932290  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
	I0323 23:26:41.963057  360910 logs.go:123] Gathering logs for container status ...
	I0323 23:26:41.963098  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0323 23:26:41.993699  360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
	I0323 23:26:41.993742  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
	I0323 23:26:42.025209  360910 logs.go:123] Gathering logs for Docker ...
	I0323 23:26:42.025243  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0323 23:26:42.056243  360910 logs.go:123] Gathering logs for describe nodes ...
	I0323 23:26:42.056283  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0323 23:26:42.128632  360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0323 23:26:42.128657  360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
	I0323 23:26:42.128672  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
	I0323 23:26:42.163262  360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
	I0323 23:26:42.163298  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
	I0323 23:26:42.188287  360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
	I0323 23:26:42.188316  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
	I0323 23:26:44.714609  360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0323 23:26:44.715050  360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0323 23:26:44.915428  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0323 23:26:44.936310  360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
	I0323 23:26:44.936415  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0323 23:26:44.957324  360910 logs.go:277] 1 containers: [a90d829451b2]
	I0323 23:26:44.957387  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0323 23:26:44.980654  360910 logs.go:277] 0 containers: []
	W0323 23:26:44.980682  360910 logs.go:279] No container was found matching "coredns"
	I0323 23:26:44.980734  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0323 23:26:45.003148  360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
	I0323 23:26:45.003234  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0323 23:26:45.022249  360910 logs.go:277] 1 containers: [333ad261cea4]
	I0323 23:26:45.022323  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0323 23:26:45.040205  360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
	I0323 23:26:45.040282  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0323 23:26:45.057312  360910 logs.go:277] 0 containers: []
	W0323 23:26:45.057337  360910 logs.go:279] No container was found matching "kindnet"
	I0323 23:26:45.057385  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0323 23:26:45.080434  360910 logs.go:277] 1 containers: [eac6b13c2df0]
	I0323 23:26:45.080479  360910 logs.go:123] Gathering logs for dmesg ...
	I0323 23:26:45.080495  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0323 23:26:45.104865  360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
	I0323 23:26:45.104918  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
	I0323 23:26:45.133666  360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
	I0323 23:26:45.133710  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
	I0323 23:26:45.162931  360910 logs.go:123] Gathering logs for container status ...
	I0323 23:26:45.162970  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0323 23:26:45.202791  360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
	I0323 23:26:45.202825  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
	I0323 23:26:45.244277  360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
	I0323 23:26:45.244379  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
	I0323 23:26:45.282659  360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
	I0323 23:26:45.282742  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
	I0323 23:26:45.313254  360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
	I0323 23:26:45.313334  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
	I0323 23:26:45.336545  360910 logs.go:123] Gathering logs for Docker ...
	I0323 23:26:45.336594  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0323 23:26:45.377128  360910 logs.go:123] Gathering logs for kubelet ...
	I0323 23:26:45.377170  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0323 23:26:45.514087  360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
	I0323 23:26:45.514205  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
	I0323 23:26:45.592082  360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
	I0323 23:26:45.592121  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
	I0323 23:26:45.619139  360910 logs.go:123] Gathering logs for describe nodes ...
	I0323 23:26:45.619172  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0323 23:26:45.678335  360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0323 23:26:45.678389  360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
	I0323 23:26:45.678404  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
	I0323 23:26:45.436358  401618 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0323 23:26:45.446881  401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0323 23:26:45.460908  401618 node_ready.go:35] waiting up to 6m0s for node "pause-574316" to be "Ready" ...
	I0323 23:26:45.463792  401618 node_ready.go:49] node "pause-574316" has status "Ready":"True"
	I0323 23:26:45.463814  401618 node_ready.go:38] duration metric: took 2.869699ms waiting for node "pause-574316" to be "Ready" ...
	I0323 23:26:45.463823  401618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0323 23:26:45.468648  401618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.645139  401618 pod_ready.go:92] pod "coredns-787d4945fb-lljqk" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:45.645160  401618 pod_ready.go:81] duration metric: took 176.488938ms waiting for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:45.645170  401618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.045231  401618 pod_ready.go:92] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:46.045260  401618 pod_ready.go:81] duration metric: took 400.083583ms waiting for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.045274  401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.444173  401618 pod_ready.go:92] pod "kube-apiserver-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:46.444194  401618 pod_ready.go:81] duration metric: took 398.912915ms waiting for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.444204  401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.844571  401618 pod_ready.go:92] pod "kube-controller-manager-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:46.844592  401618 pod_ready.go:81] duration metric: took 400.382744ms waiting for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:46.844602  401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:47.244514  401618 pod_ready.go:92] pod "kube-proxy-lnk2t" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:47.244538  401618 pod_ready.go:81] duration metric: took 399.927693ms waiting for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:47.244548  401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:47.644184  401618 pod_ready.go:92] pod "kube-scheduler-pause-574316" in "kube-system" namespace has status "Ready":"True"
	I0323 23:26:47.644203  401618 pod_ready.go:81] duration metric: took 399.648889ms waiting for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
	I0323 23:26:47.644210  401618 pod_ready.go:38] duration metric: took 2.180378997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0323 23:26:47.644231  401618 api_server.go:51] waiting for apiserver process to appear ...
	I0323 23:26:47.644265  401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0323 23:26:47.660462  401618 api_server.go:71] duration metric: took 2.230343116s to wait for apiserver process to appear ...
	I0323 23:26:47.660489  401618 api_server.go:87] waiting for apiserver healthz status ...
	I0323 23:26:47.660508  401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0323 23:26:47.667464  401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0323 23:26:47.668285  401618 api_server.go:140] control plane version: v1.26.3
	I0323 23:26:47.668303  401618 api_server.go:130] duration metric: took 7.807644ms to wait for apiserver health ...
	I0323 23:26:47.668310  401618 system_pods.go:43] waiting for kube-system pods to appear ...
	I0323 23:26:47.847116  401618 system_pods.go:59] 6 kube-system pods found
	I0323 23:26:47.847153  401618 system_pods.go:61] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running
	I0323 23:26:47.847161  401618 system_pods.go:61] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
	I0323 23:26:47.847168  401618 system_pods.go:61] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
	I0323 23:26:47.847175  401618 system_pods.go:61] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
	I0323 23:26:47.847181  401618 system_pods.go:61] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
	I0323 23:26:47.847187  401618 system_pods.go:61] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
	I0323 23:26:47.847193  401618 system_pods.go:74] duration metric: took 178.878592ms to wait for pod list to return data ...
	I0323 23:26:47.847201  401618 default_sa.go:34] waiting for default service account to be created ...
	I0323 23:26:48.044586  401618 default_sa.go:45] found service account: "default"
	I0323 23:26:48.044616  401618 default_sa.go:55] duration metric: took 197.409776ms for default service account to be created ...
	I0323 23:26:48.044630  401618 system_pods.go:116] waiting for k8s-apps to be running ...
	I0323 23:26:48.247931  401618 system_pods.go:86] 6 kube-system pods found
	I0323 23:26:48.247963  401618 system_pods.go:89] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running
	I0323 23:26:48.247974  401618 system_pods.go:89] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
	I0323 23:26:48.247980  401618 system_pods.go:89] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
	I0323 23:26:48.247986  401618 system_pods.go:89] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
	I0323 23:26:48.247991  401618 system_pods.go:89] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
	I0323 23:26:48.247999  401618 system_pods.go:89] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
	I0323 23:26:48.248007  401618 system_pods.go:126] duration metric: took 203.371205ms to wait for k8s-apps to be running ...
	I0323 23:26:48.248015  401618 system_svc.go:44] waiting for kubelet service to be running ....
	I0323 23:26:48.248065  401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0323 23:26:48.258927  401618 system_svc.go:56] duration metric: took 10.902515ms WaitForService to wait for kubelet.
	I0323 23:26:48.258954  401618 kubeadm.go:578] duration metric: took 2.828842444s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0323 23:26:48.258976  401618 node_conditions.go:102] verifying NodePressure condition ...
	I0323 23:26:48.449583  401618 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0323 23:26:48.449608  401618 node_conditions.go:123] node cpu capacity is 8
	I0323 23:26:48.449620  401618 node_conditions.go:105] duration metric: took 190.638556ms to run NodePressure ...
	I0323 23:26:48.449633  401618 start.go:228] waiting for startup goroutines ...
	I0323 23:26:48.449641  401618 start.go:233] waiting for cluster config update ...
	I0323 23:26:48.449652  401618 start.go:242] writing updated cluster config ...
	I0323 23:26:48.450019  401618 ssh_runner.go:195] Run: rm -f paused
	I0323 23:26:48.534780  401618 start.go:554] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
	I0323 23:26:48.538018  401618 out.go:177] * Done! kubectl is now configured to use "pause-574316" cluster and "default" namespace by default
	I0323 23:26:44.308331  427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Running}}
	I0323 23:26:44.394439  427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Status}}
	I0323 23:26:44.471392  427158 cli_runner.go:164] Run: docker exec force-systemd-env-286741 stat /var/lib/dpkg/alternatives/iptables
	I0323 23:26:44.603293  427158 oci.go:144] the created container "force-systemd-env-286741" has a running status.
	I0323 23:26:44.603330  427158 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa...
	I0323 23:26:44.920036  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0323 23:26:44.920082  427158 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0323 23:26:45.161321  427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Status}}
	I0323 23:26:45.251141  427158 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0323 23:26:45.251176  427158 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-286741 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0323 23:26:45.400052  427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Status}}
	I0323 23:26:45.485912  427158 machine.go:88] provisioning docker machine ...
	I0323 23:26:45.485973  427158 ubuntu.go:169] provisioning hostname "force-systemd-env-286741"
	I0323 23:26:45.486046  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:45.565967  427158 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:45.566601  427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I0323 23:26:45.566627  427158 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-286741 && echo "force-systemd-env-286741" | sudo tee /etc/hostname
	I0323 23:26:45.780316  427158 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-286741
	
	I0323 23:26:45.780413  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:45.856411  427158 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:45.857051  427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I0323 23:26:45.857097  427158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-286741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-286741/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-286741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0323 23:26:45.977892  427158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0323 23:26:45.977934  427158 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16143-62012/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-62012/.minikube}
	I0323 23:26:45.977978  427158 ubuntu.go:177] setting up certificates
	I0323 23:26:45.977996  427158 provision.go:83] configureAuth start
	I0323 23:26:45.978074  427158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-286741
	I0323 23:26:46.057572  427158 provision.go:138] copyHostCerts
	I0323 23:26:46.057625  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem
	I0323 23:26:46.057666  427158 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem, removing ...
	I0323 23:26:46.057678  427158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem
	I0323 23:26:46.057752  427158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem (1078 bytes)
	I0323 23:26:46.057846  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem
	I0323 23:26:46.057875  427158 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem, removing ...
	I0323 23:26:46.057885  427158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem
	I0323 23:26:46.057920  427158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem (1123 bytes)
	I0323 23:26:46.057987  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem
	I0323 23:26:46.058014  427158 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem, removing ...
	I0323 23:26:46.058025  427158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem
	I0323 23:26:46.058056  427158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem (1675 bytes)
	I0323 23:26:46.058133  427158 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-286741 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-env-286741]
	I0323 23:26:46.508497  427158 provision.go:172] copyRemoteCerts
	I0323 23:26:46.508591  427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0323 23:26:46.508655  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:46.583159  427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
	I0323 23:26:46.668948  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0323 23:26:46.669009  427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0323 23:26:46.687152  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0323 23:26:46.687222  427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
	I0323 23:26:46.706760  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0323 23:26:46.706834  427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0323 23:26:46.724180  427158 provision.go:86] duration metric: configureAuth took 746.155987ms
	I0323 23:26:46.724211  427158 ubuntu.go:193] setting minikube options for container-runtime
	I0323 23:26:46.724415  427158 config.go:182] Loaded profile config "force-systemd-env-286741": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:26:46.724478  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:46.793992  427158 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:46.794421  427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I0323 23:26:46.794437  427158 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0323 23:26:46.909667  427158 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0323 23:26:46.909696  427158 ubuntu.go:71] root file system type: overlay
	I0323 23:26:46.909827  427158 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0323 23:26:46.909896  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:46.979665  427158 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:46.980533  427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I0323 23:26:46.980649  427158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0323 23:26:47.134741  427158 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0323 23:26:47.134814  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:47.203471  427158 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:47.203895  427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33004 <nil> <nil>}
	I0323 23:26:47.203914  427158 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0323 23:26:47.958910  427158 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-23 23:26:47.129506351 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0323 23:26:47.958955  427158 machine.go:91] provisioned docker machine in 2.473006765s
	I0323 23:26:47.958969  427158 client.go:171] LocalClient.Create took 8.678571965s
	I0323 23:26:47.958985  427158 start.go:167] duration metric: libmachine.API.Create for "force-systemd-env-286741" took 8.67862836s
	I0323 23:26:47.959002  427158 start.go:300] post-start starting for "force-systemd-env-286741" (driver="docker")
	I0323 23:26:47.959010  427158 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0323 23:26:47.959086  427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0323 23:26:47.959133  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:48.039006  427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
	I0323 23:26:48.138241  427158 ssh_runner.go:195] Run: cat /etc/os-release
	I0323 23:26:48.141753  427158 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0323 23:26:48.141790  427158 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0323 23:26:48.141804  427158 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0323 23:26:48.141812  427158 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0323 23:26:48.141823  427158 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-62012/.minikube/addons for local assets ...
	I0323 23:26:48.141882  427158 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-62012/.minikube/files for local assets ...
	I0323 23:26:48.141972  427158 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem -> 687022.pem in /etc/ssl/certs
	I0323 23:26:48.141981  427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem -> /etc/ssl/certs/687022.pem
	I0323 23:26:48.142083  427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0323 23:26:48.149479  427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem --> /etc/ssl/certs/687022.pem (1708 bytes)
	I0323 23:26:48.170718  427158 start.go:303] post-start completed in 211.698395ms
	I0323 23:26:48.171159  427158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-286741
	I0323 23:26:48.255406  427158 profile.go:148] Saving config to /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/force-systemd-env-286741/config.json ...
	I0323 23:26:48.255709  427158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0323 23:26:48.255768  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:48.348731  427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
	I0323 23:26:48.444848  427158 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0323 23:26:48.454096  427158 start.go:128] duration metric: createHost completed in 9.176760391s
	I0323 23:26:48.454122  427158 start.go:83] releasing machines lock for "force-systemd-env-286741", held for 9.176923746s
	I0323 23:26:48.454203  427158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-286741
	I0323 23:26:48.544171  427158 ssh_runner.go:195] Run: cat /version.json
	I0323 23:26:48.544227  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:48.544232  427158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0323 23:26:48.544306  427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
	I0323 23:26:48.702573  427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
	I0323 23:26:48.713344  427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
	I0323 23:26:48.792996  427158 ssh_runner.go:195] Run: systemctl --version
	I0323 23:26:47.250761  428061 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-063647:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (3.180840905s)
	I0323 23:26:47.250789  428061 kic.go:199] duration metric: took 3.180984 seconds to extract preloaded images to volume
	W0323 23:26:47.250903  428061 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0323 23:26:47.250984  428061 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0323 23:26:47.383772  428061 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-063647 --name old-k8s-version-063647 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-063647 --network old-k8s-version-063647 --ip 192.168.94.2 --volume old-k8s-version-063647:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
	I0323 23:26:47.858547  428061 cli_runner.go:164] Run: docker container inspect old-k8s-version-063647 --format={{.State.Running}}
	I0323 23:26:47.933060  428061 cli_runner.go:164] Run: docker container inspect old-k8s-version-063647 --format={{.State.Status}}
	I0323 23:26:48.018265  428061 cli_runner.go:164] Run: docker exec old-k8s-version-063647 stat /var/lib/dpkg/alternatives/iptables
	I0323 23:26:48.141026  428061 oci.go:144] the created container "old-k8s-version-063647" has a running status.
	I0323 23:26:48.141055  428061 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/old-k8s-version-063647/id_rsa...
	I0323 23:26:48.262302  428061 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16143-62012/.minikube/machines/old-k8s-version-063647/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0323 23:26:48.410628  428061 cli_runner.go:164] Run: docker container inspect old-k8s-version-063647 --format={{.State.Status}}
	I0323 23:26:48.521228  428061 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0323 23:26:48.521255  428061 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-063647 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0323 23:26:48.710098  428061 cli_runner.go:164] Run: docker container inspect old-k8s-version-063647 --format={{.State.Status}}
	I0323 23:26:48.794908  428061 machine.go:88] provisioning docker machine ...
	I0323 23:26:48.794950  428061 ubuntu.go:169] provisioning hostname "old-k8s-version-063647"
	I0323 23:26:48.795019  428061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-063647
	I0323 23:26:48.888641  428061 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:48.889333  428061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33009 <nil> <nil>}
	I0323 23:26:48.889367  428061 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-063647 && echo "old-k8s-version-063647" | sudo tee /etc/hostname
	I0323 23:26:49.019472  428061 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-063647
	
	I0323 23:26:49.019553  428061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-063647
	I0323 23:26:49.106103  428061 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:49.106769  428061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33009 <nil> <nil>}
	I0323 23:26:49.106808  428061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-063647' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-063647/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-063647' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0323 23:26:49.229809  428061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0323 23:26:49.229846  428061 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16143-62012/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-62012/.minikube}
	I0323 23:26:49.229893  428061 ubuntu.go:177] setting up certificates
	I0323 23:26:49.229909  428061 provision.go:83] configureAuth start
	I0323 23:26:49.229969  428061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-063647
	I0323 23:26:49.322026  428061 provision.go:138] copyHostCerts
	I0323 23:26:49.322105  428061 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem, removing ...
	I0323 23:26:49.322114  428061 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem
	I0323 23:26:49.322170  428061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem (1078 bytes)
	I0323 23:26:49.322240  428061 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem, removing ...
	I0323 23:26:49.322244  428061 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem
	I0323 23:26:49.322265  428061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem (1123 bytes)
	I0323 23:26:49.322332  428061 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem, removing ...
	I0323 23:26:49.322337  428061 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem
	I0323 23:26:49.322355  428061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem (1675 bytes)
	I0323 23:26:49.322394  428061 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-063647 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-063647]
	I0323 23:26:49.564453  428061 provision.go:172] copyRemoteCerts
	I0323 23:26:49.564520  428061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0323 23:26:49.564557  428061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-063647
	I0323 23:26:49.641965  428061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33009 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/old-k8s-version-063647/id_rsa Username:docker}
	I0323 23:26:49.733721  428061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0323 23:26:49.753158  428061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0323 23:26:49.771013  428061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0323 23:26:49.791203  428061 provision.go:86] duration metric: configureAuth took 561.272972ms
	I0323 23:26:49.791234  428061 ubuntu.go:193] setting minikube options for container-runtime
	I0323 23:26:49.791439  428061 config.go:182] Loaded profile config "old-k8s-version-063647": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0323 23:26:49.791508  428061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-063647
	I0323 23:26:49.870915  428061 main.go:141] libmachine: Using SSH client type: native
	I0323 23:26:49.871642  428061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil>  [] 0s} 127.0.0.1 33009 <nil> <nil>}
	I0323 23:26:49.871668  428061 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0323 23:26:49.989937  428061 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0323 23:26:49.989968  428061 ubuntu.go:71] root file system type: overlay
	I0323 23:26:49.990126  428061 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0323 23:26:49.990208  428061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-063647
	I0323 23:26:48.836687  427158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0323 23:26:48.841606  427158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0323 23:26:48.864185  427158 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0323 23:26:48.864266  427158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0323 23:26:48.881822  427158 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0323 23:26:48.881849  427158 start.go:481] detecting cgroup driver to use...
	I0323 23:26:48.881869  427158 start.go:485] using "systemd" cgroup driver as enforced via flags
	I0323 23:26:48.881966  427158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0323 23:26:48.898313  427158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0323 23:26:48.907494  427158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0323 23:26:48.917456  427158 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I0323 23:26:48.917564  427158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0323 23:26:48.927215  427158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0323 23:26:48.935905  427158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0323 23:26:48.944134  427158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0323 23:26:48.952334  427158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0323 23:26:48.959676  427158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0323 23:26:48.971410  427158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0323 23:26:48.979151  427158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0323 23:26:48.986222  427158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0323 23:26:49.087285  427158 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0323 23:26:49.172432  427158 start.go:481] detecting cgroup driver to use...
	I0323 23:26:49.172458  427158 start.go:485] using "systemd" cgroup driver as enforced via flags
	I0323 23:26:49.172498  427158 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0323 23:26:49.187707  427158 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0323 23:26:49.187768  427158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0323 23:26:49.201896  427158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0323 23:26:49.216844  427158 ssh_runner.go:195] Run: which cri-dockerd
	I0323 23:26:49.220616  427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0323 23:26:49.229950  427158 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0323 23:26:49.265586  427158 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0323 23:26:49.361874  427158 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0323 23:26:49.456508  427158 docker.go:538] configuring docker to use "systemd" as cgroup driver...
	I0323 23:26:49.456538  427158 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
	I0323 23:26:49.472752  427158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0323 23:26:49.573497  427158 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0323 23:26:49.819861  427158 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0323 23:26:49.905025  427158 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0323 23:26:49.985735  427158 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0323 23:26:50.077462  427158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0323 23:26:50.164000  427158 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0323 23:26:50.176212  427158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0323 23:26:50.272787  427158 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0323 23:26:50.342739  427158 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0323 23:26:50.342814  427158 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0323 23:26:50.346501  427158 start.go:549] Will wait 60s for crictl version
	I0323 23:26:50.346550  427158 ssh_runner.go:195] Run: which crictl
	I0323 23:26:50.349633  427158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0323 23:26:50.381308  427158 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0323 23:26:50.381358  427158 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0323 23:26:50.408501  427158 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0323 23:26:48.219558  360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0323 23:26:48.219977  360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0323 23:26:48.415263  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0323 23:26:48.444393  360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
	I0323 23:26:48.444484  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0323 23:26:48.481867  360910 logs.go:277] 1 containers: [a90d829451b2]
	I0323 23:26:48.481950  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0323 23:26:48.503185  360910 logs.go:277] 0 containers: []
	W0323 23:26:48.503207  360910 logs.go:279] No container was found matching "coredns"
	I0323 23:26:48.503253  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0323 23:26:48.526729  360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
	I0323 23:26:48.526806  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0323 23:26:48.553782  360910 logs.go:277] 1 containers: [333ad261cea4]
	I0323 23:26:48.553860  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0323 23:26:48.582449  360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
	I0323 23:26:48.582541  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0323 23:26:48.606619  360910 logs.go:277] 0 containers: []
	W0323 23:26:48.606650  360910 logs.go:279] No container was found matching "kindnet"
	I0323 23:26:48.606712  360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0323 23:26:48.638702  360910 logs.go:277] 1 containers: [eac6b13c2df0]
	I0323 23:26:48.638756  360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
	I0323 23:26:48.638773  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
	I0323 23:26:48.711513  360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
	I0323 23:26:48.711551  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
	I0323 23:26:48.740243  360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
	I0323 23:26:48.740273  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
	I0323 23:26:48.792520  360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
	I0323 23:26:48.792567  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
	I0323 23:26:48.891767  360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
	I0323 23:26:48.891809  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
	I0323 23:26:48.914706  360910 logs.go:123] Gathering logs for Docker ...
	I0323 23:26:48.914738  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0323 23:26:48.954192  360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
	I0323 23:26:48.954221  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
	I0323 23:26:48.976760  360910 logs.go:123] Gathering logs for kubelet ...
	I0323 23:26:48.976795  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0323 23:26:49.093786  360910 logs.go:123] Gathering logs for describe nodes ...
	I0323 23:26:49.093831  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0323 23:26:49.162630  360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0323 23:26:49.162655  360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
	I0323 23:26:49.162670  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
	I0323 23:26:49.195807  360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
	I0323 23:26:49.195851  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
	I0323 23:26:49.238251  360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
	I0323 23:26:49.238286  360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
	I0323 23:26:49.275913  360910 logs.go:123] Gathering logs for dmesg ...
	I0323 23:26:49.276000  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0323 23:26:49.302877  360910 logs.go:123] Gathering logs for container status ...
	I0323 23:26:49.302974  360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-03-23 23:25:05 UTC, end at Thu 2023-03-23 23:26:52 UTC. --
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.002500928Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.002674094Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.002709828Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.003286601Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.025479889Z" level=info msg="Loading containers: start."
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.172830226Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.214010134Z" level=info msg="Loading containers: done."
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.225800214Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.225888560Z" level=info msg="Daemon has completed initialization"
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.240113456Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 23 23:25:49 pause-574316 systemd[1]: Started Docker Application Container Engine.
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.246358737Z" level=info msg="API listen on [::]:2376"
	Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.256115277Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 23 23:26:11 pause-574316 dockerd[5186]: time="2023-03-23T23:26:11.796102440Z" level=info msg="ignoring event" container=6a198df97e4bd33611868552786c34b16a1896b4a18709ad6eaa65e7486b5d20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.145302003Z" level=info msg="ignoring event" container=45416a5cd36b4138409f0bf454eb922e1d3369a86ce1c0c803f7da26778cf7f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.379532489Z" level=info msg="ignoring event" container=60c1dee0f1786db1b413aa688e7a57acd71e6c18979e95b21131d3496a98cad8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.392985764Z" level=info msg="ignoring event" container=840b0c35d4448d1362a7bc020e0fac35331ad72438dfc00e79685e0baca6b11b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.453179245Z" level=info msg="ignoring event" container=656b70fafbc2b7e6611131272fea7433846a18987047e3c8d2e446e8b5290cce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.457378879Z" level=info msg="ignoring event" container=f70a37494730e3c42d183c94cd69472a7f672f61f330f75482164f78d4eda989 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.459285840Z" level=info msg="ignoring event" container=2b7bc2ac835be2dc569bede97afe45c6357e58e4e23f23539dc1433d3a84bedc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.460667173Z" level=info msg="ignoring event" container=d517e8e4d5d2dbd1822c028a0de7f091686d0e0657198f93573dd122ee6485a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.460699514Z" level=info msg="ignoring event" container=4b1c73f39f8c07193f987da6a6d6784c9f87cb43caa7ea5f424e367b0f2e27e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.467741307Z" level=info msg="ignoring event" container=80c388522552702a89135b09d2d073b9c57d1fbc851a0a89b0cec032be049f71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.471167750Z" level=info msg="ignoring event" container=7ff3dcd747a3b0f733eda143cf5993de0d0e1afd3dbd1b2b2f9a8fd3dbea2be9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 23 23:26:25 pause-574316 dockerd[5186]: time="2023-03-23T23:26:25.347736368Z" level=info msg="ignoring event" container=a9b1dc3910d9b5195bfff4b0d6cedbf54b214159654d4e23645c839bf053ad23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	0f0398bddb511       5185b96f0becf       20 seconds ago       Running             coredns                   3                   542477f9c5e1d
	43a8930300a57       92ed2bec97a63       20 seconds ago       Running             kube-proxy                2                   28a061395dad5
	e7cd8ca7c7242       5a79047369329       25 seconds ago       Running             kube-scheduler            3                   4c131416edb23
	f946ab43717f1       ce8c2293ef09c       25 seconds ago       Running             kube-controller-manager   3                   3ca9ec9bef2c4
	1137111a33d08       fce326961ae2d       25 seconds ago       Running             etcd                      3                   f4e9af6f99313
	cea7ca7eb9ad0       1d9b3cbae03ce       30 seconds ago       Running             kube-apiserver            2                   f84cdf335e887
	656b70fafbc2b       fce326961ae2d       41 seconds ago       Exited              etcd                      2                   60c1dee0f1786
	2b7bc2ac835be       5a79047369329       52 seconds ago       Exited              kube-scheduler            2                   4b1c73f39f8c0
	7ff3dcd747a3b       92ed2bec97a63       53 seconds ago       Exited              kube-proxy                1                   d517e8e4d5d2d
	45416a5cd36b4       ce8c2293ef09c       53 seconds ago       Exited              kube-controller-manager   2                   f70a37494730e
	a9b1dc3910d9b       5185b96f0becf       About a minute ago   Exited              coredns                   2                   840b0c35d4448
	6a198df97e4bd       1d9b3cbae03ce       About a minute ago   Exited              kube-apiserver            1                   80c3885225527
	
	* 
	* ==> coredns [0f0398bddb51] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:52573 - 39862 "HINFO IN 4074527240347548607.320685648437704123. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.037884079s
	
	* 
	* ==> coredns [a9b1dc3910d9] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:45219 - 2821 "HINFO IN 6139167459808748397.3590652508084774261. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035135004s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-574316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-574316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9478c9159ab3ccef5e7f933edc25c8da75bed69
	                    minikube.k8s.io/name=pause-574316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_23T23_25_21_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Mar 2023 23:25:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-574316
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Mar 2023 23:26:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Mar 2023 23:26:30 +0000   Thu, 23 Mar 2023 23:25:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Mar 2023 23:26:30 +0000   Thu, 23 Mar 2023 23:25:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Mar 2023 23:26:30 +0000   Thu, 23 Mar 2023 23:25:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Mar 2023 23:26:30 +0000   Thu, 23 Mar 2023 23:25:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-574316
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 b249c14bbd9147e887f6315aff00ef06
	  System UUID:                7bdff168-7cdd-493c-bdda-f1cc26739b6e
	  Boot ID:                    9d192f19-d9f5-4df3-a502-4030f2da5343
	  Kernel Version:             5.15.0-1030-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.3
	  Kube-Proxy Version:         v1.26.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-lljqk                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     78s
	  kube-system                 etcd-pause-574316                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         90s
	  kube-system                 kube-apiserver-pause-574316             250m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-pause-574316    200m (2%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-lnk2t                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-574316             100m (1%)     0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 78s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientPID     98s (x3 over 98s)  kubelet          Node pause-574316 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    98s (x4 over 98s)  kubelet          Node pause-574316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  98s (x4 over 98s)  kubelet          Node pause-574316 status is now: NodeHasSufficientMemory
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  91s                kubelet          Node pause-574316 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s                kubelet          Node pause-574316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s                kubelet          Node pause-574316 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             91s                kubelet          Node pause-574316 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                80s                kubelet          Node pause-574316 status is now: NodeReady
	  Normal  RegisteredNode           79s                node-controller  Node pause-574316 event: Registered Node pause-574316 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-574316 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-574316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-574316 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-574316 event: Registered Node pause-574316 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000619] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff da 9a 31 26 91 58 08 06
	[ +46.489619] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 03 7b bf b1 b8 08 06
	[Mar23 23:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 12 06 3d f3 17 47 08 06
	[Mar23 23:21] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e 93 92 d3 0d 7e 08 06
	[  +0.437885] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 93 92 d3 0d 7e 08 06
	[Mar23 23:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 9e 53 5f 42 99 08 06
	[Mar23 23:23] process 'docker/tmp/qemu-check941714971/check' started with executable stack
	[  +9.389883] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e f3 36 2c c1 cd 08 06
	[Mar23 23:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae cb 28 07 13 77 08 06
	[  +0.012995] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 0c 92 4c a9 1c 08 06
	[ +15.547404] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 10 ab 83 31 f9 08 06
	[Mar23 23:26] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 20 81 ad 5c b9 08 06
	[  +5.887427] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 6b a8 e3 05 d7 08 06
	
	* 
	* ==> etcd [1137111a33d0] <==
	* {"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 5"}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 5"}
	{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-574316 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-23T23:26:29.060Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-23T23:26:29.061Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-23T23:26:29.061Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-03-23T23:26:45.146Z","caller":"traceutil/trace.go:171","msg":"trace[1088553463] linearizableReadLoop","detail":"{readStateIndex:500; appliedIndex:499; }","duration":"187.629875ms","start":"2023-03-23T23:26:44.958Z","end":"2023-03-23T23:26:45.146Z","steps":["trace[1088553463] 'read index received'  (duration: 113.126176ms)","trace[1088553463] 'applied index is now lower than readState.Index'  (duration: 74.502878ms)"],"step_count":2}
	{"level":"info","ts":"2023-03-23T23:26:45.146Z","caller":"traceutil/trace.go:171","msg":"trace[1657399943] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"197.637334ms","start":"2023-03-23T23:26:44.948Z","end":"2023-03-23T23:26:45.146Z","steps":["trace[1657399943] 'process raft request'  (duration: 123.099553ms)","trace[1657399943] 'compare'  (duration: 74.347233ms)"],"step_count":2}
	{"level":"warn","ts":"2023-03-23T23:26:45.146Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"187.827176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-574316\" ","response":"range_response_count:1 size:6942"}
	{"level":"info","ts":"2023-03-23T23:26:45.146Z","caller":"traceutil/trace.go:171","msg":"trace[666014890] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-574316; range_end:; response_count:1; response_revision:463; }","duration":"187.950429ms","start":"2023-03-23T23:26:44.958Z","end":"2023-03-23T23:26:45.146Z","steps":["trace[666014890] 'agreement among raft nodes before linearized reading'  (duration: 187.770048ms)"],"step_count":1}
	{"level":"warn","ts":"2023-03-23T23:26:45.429Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"133.41564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:764"}
	{"level":"info","ts":"2023-03-23T23:26:45.429Z","caller":"traceutil/trace.go:171","msg":"trace[1689761979] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:463; }","duration":"133.510104ms","start":"2023-03-23T23:26:45.295Z","end":"2023-03-23T23:26:45.429Z","steps":["trace[1689761979] 'range keys from in-memory index tree'  (duration: 133.250873ms)"],"step_count":1}
	
	* 
	* ==> etcd [656b70fafbc2] <==
	* {"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-574316 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-23T23:26:14.577Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-03-23T23:26:14.577Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-23T23:26:20.377Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-03-23T23:26:20.377Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-574316","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"info","ts":"2023-03-23T23:26:20.380Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-03-23T23:26:20.382Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-23T23:26:20.384Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-23T23:26:20.384Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-574316","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  23:26:52 up  2:09,  0 users,  load average: 5.17, 4.13, 2.82
	Linux pause-574316 5.15.0-1030-gcp #37~20.04.1-Ubuntu SMP Mon Feb 20 04:30:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [6a198df97e4b] <==
	* W0323 23:26:08.603014       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0323 23:26:09.405661       1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0323 23:26:09.657900       1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0323 23:26:11.774251       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-apiserver [cea7ca7eb9ad] <==
	* I0323 23:26:30.648351       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0323 23:26:30.648430       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0323 23:26:30.684300       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0323 23:26:30.639853       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0323 23:26:30.639867       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0323 23:26:30.639933       1 autoregister_controller.go:141] Starting autoregister controller
	I0323 23:26:30.690081       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0323 23:26:30.690161       1 cache.go:39] Caches are synced for autoregister controller
	I0323 23:26:30.701389       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0323 23:26:30.750507       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0323 23:26:30.750975       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0323 23:26:30.752373       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0323 23:26:30.752385       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0323 23:26:30.752497       1 shared_informer.go:280] Caches are synced for configmaps
	I0323 23:26:30.753570       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0323 23:26:30.753615       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0323 23:26:31.339987       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0323 23:26:31.646840       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0323 23:26:32.375391       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0323 23:26:32.388141       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0323 23:26:32.474747       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0323 23:26:32.557448       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0323 23:26:32.566478       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0323 23:26:43.845098       1 controller.go:615] quota admission added evaluator for: endpoints
	I0323 23:26:43.899216       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [45416a5cd36b] <==
	* I0323 23:25:59.829591       1 serving.go:348] Generated self-signed cert in-memory
	I0323 23:26:00.084118       1 controllermanager.go:182] Version: v1.26.3
	I0323 23:26:00.084152       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0323 23:26:00.085310       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0323 23:26:00.085306       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0323 23:26:00.085554       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0323 23:26:00.085646       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	F0323 23:26:20.087377       1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	
	* 
	* ==> kube-controller-manager [f946ab43717f] <==
	* I0323 23:26:43.682858       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0323 23:26:43.685481       1 shared_informer.go:280] Caches are synced for GC
	I0323 23:26:43.691799       1 shared_informer.go:280] Caches are synced for HPA
	I0323 23:26:43.691846       1 shared_informer.go:280] Caches are synced for daemon sets
	I0323 23:26:43.691921       1 shared_informer.go:280] Caches are synced for PVC protection
	I0323 23:26:43.691962       1 shared_informer.go:280] Caches are synced for endpoint
	I0323 23:26:43.692814       1 shared_informer.go:280] Caches are synced for ephemeral
	I0323 23:26:43.692841       1 shared_informer.go:280] Caches are synced for cronjob
	I0323 23:26:43.692907       1 shared_informer.go:280] Caches are synced for service account
	I0323 23:26:43.696646       1 shared_informer.go:280] Caches are synced for taint
	I0323 23:26:43.696746       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	I0323 23:26:43.696779       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	W0323 23:26:43.696843       1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-574316. Assuming now as a timestamp.
	I0323 23:26:43.696884       1 taint_manager.go:211] "Sending events to api server"
	I0323 23:26:43.696913       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0323 23:26:43.697076       1 event.go:294] "Event occurred" object="pause-574316" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-574316 event: Registered Node pause-574316 in Controller"
	I0323 23:26:43.698625       1 shared_informer.go:280] Caches are synced for crt configmap
	I0323 23:26:43.701545       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0323 23:26:43.740889       1 shared_informer.go:280] Caches are synced for attach detach
	I0323 23:26:43.792552       1 shared_informer.go:280] Caches are synced for disruption
	I0323 23:26:43.821372       1 shared_informer.go:280] Caches are synced for resource quota
	I0323 23:26:43.894489       1 shared_informer.go:280] Caches are synced for resource quota
	I0323 23:26:44.210014       1 shared_informer.go:280] Caches are synced for garbage collector
	I0323 23:26:44.229157       1 shared_informer.go:280] Caches are synced for garbage collector
	I0323 23:26:44.229247       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [43a8930300a5] <==
	* I0323 23:26:32.502821       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0323 23:26:32.502919       1 server_others.go:109] "Detected node IP" address="192.168.67.2"
	I0323 23:26:32.503040       1 server_others.go:535] "Using iptables proxy"
	I0323 23:26:32.581352       1 server_others.go:176] "Using iptables Proxier"
	I0323 23:26:32.581492       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0323 23:26:32.581507       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0323 23:26:32.581525       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0323 23:26:32.581580       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0323 23:26:32.582126       1 server.go:655] "Version info" version="v1.26.3"
	I0323 23:26:32.582166       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0323 23:26:32.582886       1 config.go:226] "Starting endpoint slice config controller"
	I0323 23:26:32.583504       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0323 23:26:32.583082       1 config.go:317] "Starting service config controller"
	I0323 23:26:32.583523       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0323 23:26:32.583137       1 config.go:444] "Starting node config controller"
	I0323 23:26:32.583545       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0323 23:26:32.684533       1 shared_informer.go:280] Caches are synced for service config
	I0323 23:26:32.684613       1 shared_informer.go:280] Caches are synced for node config
	I0323 23:26:32.684623       1 shared_informer.go:280] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [7ff3dcd747a3] <==
	* E0323 23:26:09.977748       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": net/http: TLS handshake timeout
	E0323 23:26:12.783360       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": dial tcp 192.168.67.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.67.2:39882->192.168.67.2:8443: read: connection reset by peer
	E0323 23:26:14.853949       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:18.965897       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": dial tcp 192.168.67.2:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [2b7bc2ac835b] <==
	* W0323 23:26:16.679162       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:16.679200       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:16.812219       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:16.812268       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:16.846940       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:16.846981       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:17.007369       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:17.007406       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:19.575702       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:19.575741       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:19.775890       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:19.775937       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:19.850977       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:19.851021       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:20.060721       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:20.060762       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:20.080470       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:20.080525       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0323 23:26:20.208535       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0323 23:26:20.208595       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	I0323 23:26:20.353988       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0323 23:26:20.354103       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0323 23:26:20.354167       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0323 23:26:20.354182       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0323 23:26:20.354209       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [e7cd8ca7c724] <==
	* I0323 23:26:28.403386       1 serving.go:348] Generated self-signed cert in-memory
	I0323 23:26:30.771476       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.3"
	I0323 23:26:30.771503       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0323 23:26:30.778353       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0323 23:26:30.778381       1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0323 23:26:30.778428       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0323 23:26:30.778441       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0323 23:26:30.778478       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0323 23:26:30.778489       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0323 23:26:30.779761       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0323 23:26:30.784753       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0323 23:26:30.878975       1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
	I0323 23:26:30.879041       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0323 23:26:30.878980       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-03-23 23:25:05 UTC, end at Thu 2023-03-23 23:26:53 UTC. --
	Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.503080    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16bcc950c7983e1395e2f1091ca3b040-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-574316\" (UID: \"16bcc950c7983e1395e2f1091ca3b040\") " pod="kube-system/kube-controller-manager-pause-574316"
	Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.748833    7640 scope.go:115] "RemoveContainer" containerID="656b70fafbc2b7e6611131272fea7433846a18987047e3c8d2e446e8b5290cce"
	Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.763712    7640 scope.go:115] "RemoveContainer" containerID="45416a5cd36b4138409f0bf454eb922e1d3369a86ce1c0c803f7da26778cf7f4"
	Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.773578    7640 scope.go:115] "RemoveContainer" containerID="2b7bc2ac835be2dc569bede97afe45c6357e58e4e23f23539dc1433d3a84bedc"
	Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.818789    7640 kubelet_node_status.go:108] "Node was previously registered" node="pause-574316"
	Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.819442    7640 kubelet_node_status.go:73] "Successfully registered node" node="pause-574316"
	Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.821124    7640 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.827327    7640 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.062727    7640 apiserver.go:52] "Watching apiserver"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.069251    7640 topology_manager.go:210] "Topology Admit Handler"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.069369    7640 topology_manager.go:210] "Topology Admit Handler"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.069450    7640 topology_manager.go:210] "Topology Admit Handler"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.098738    7640 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.160848    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxzp5\" (UniqueName: \"kubernetes.io/projected/aeba9090-2690-42e1-8439-a0cd55ada6d0-kube-api-access-kxzp5\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.160919    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wm5m\" (UniqueName: \"kubernetes.io/projected/ce593e1c-39de-4a21-994e-157f74ab568e-kube-api-access-8wm5m\") pod \"coredns-787d4945fb-lljqk\" (UID: \"ce593e1c-39de-4a21-994e-157f74ab568e\") " pod="kube-system/coredns-787d4945fb-lljqk"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.160966    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeba9090-2690-42e1-8439-a0cd55ada6d0-lib-modules\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161002    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce593e1c-39de-4a21-994e-157f74ab568e-config-volume\") pod \"coredns-787d4945fb-lljqk\" (UID: \"ce593e1c-39de-4a21-994e-157f74ab568e\") " pod="kube-system/coredns-787d4945fb-lljqk"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161027    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aeba9090-2690-42e1-8439-a0cd55ada6d0-kube-proxy\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161059    7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeba9090-2690-42e1-8439-a0cd55ada6d0-xtables-lock\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
	Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161088    7640 reconciler.go:41] "Reconciler: start to sync state"
	Mar 23 23:26:32 pause-574316 kubelet[7640]: I0323 23:26:32.271414    7640 scope.go:115] "RemoveContainer" containerID="7ff3dcd747a3b0f733eda143cf5993de0d0e1afd3dbd1b2b2f9a8fd3dbea2be9"
	Mar 23 23:26:32 pause-574316 kubelet[7640]: I0323 23:26:32.700707    7640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="542477f9c5e1de564352e093d277e29ea04f9ada02cdebe4924d534ea2be3623"
	Mar 23 23:26:34 pause-574316 kubelet[7640]: I0323 23:26:34.734860    7640 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Mar 23 23:26:35 pause-574316 kubelet[7640]: I0323 23:26:35.343216    7640 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=05fc3b9f-534f-4c25-ab9a-0f1ea4cb9014 path="/var/lib/kubelet/pods/05fc3b9f-534f-4c25-ab9a-0f1ea4cb9014/volumes"
	Mar 23 23:26:37 pause-574316 kubelet[7640]: I0323 23:26:37.006845    7640 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-574316 -n pause-574316
E0323 23:26:53.591311   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context pause-574316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (75.12s)
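For readers triaging this failure: the failed check is a plain substring assertion on the output of the second `minikube start` run against the existing profile. The sketch below is a hypothetical illustration of that general shape, not the actual pause_test.go code; the helper name, the flags passed, and the wantSubstring parameter are assumptions.
	package pausesketch

	import (
		"os/exec"
		"strings"
		"testing"
	)

	// checkSecondStart is a hypothetical sketch -- not the real pause_test.go code.
	// It re-runs `minikube start` on an existing profile and requires a marker
	// string to appear in the combined output.
	func checkSecondStart(t *testing.T, minikubeBin, profile, wantSubstring string) {
		t.Helper()
		// Flags here are illustrative; the real test passes its own set.
		cmd := exec.Command(minikubeBin, "start", "-p", profile, "--driver=docker")
		out, err := cmd.CombinedOutput()
		if err != nil {
			t.Fatalf("second start failed: %v\n%s", err, out)
		}
		if !strings.Contains(string(out), wantSubstring) {
			t.Errorf("expected second start output to include %q, got:\n%s", wantSubstring, out)
		}
	}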

                                                
                                    

Test pass (290/313)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 9.11
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.26.3/json-events 4.43
11 TestDownloadOnly/v1.26.3/preload-exists 0
15 TestDownloadOnly/v1.26.3/LogsDuration 0.06
17 TestDownloadOnly/v1.27.0-beta.0/json-events 4.38
18 TestDownloadOnly/v1.27.0-beta.0/preload-exists 0
22 TestDownloadOnly/v1.27.0-beta.0/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.62
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.36
25 TestDownloadOnlyKic 1.65
26 TestBinaryMirror 1.15
27 TestOffline 55.95
29 TestAddons/Setup 100.94
31 TestAddons/parallel/Registry 15.14
32 TestAddons/parallel/Ingress 34.24
33 TestAddons/parallel/MetricsServer 5.68
34 TestAddons/parallel/HelmTiller 11.79
36 TestAddons/parallel/CSI 40.21
37 TestAddons/parallel/Headlamp 10.14
38 TestAddons/parallel/CloudSpanner 5.36
41 TestAddons/serial/GCPAuth/Namespaces 0.13
42 TestAddons/StoppedEnableDisable 11.25
43 TestCertOptions 30.27
44 TestCertExpiration 276.13
45 TestDockerFlags 27.87
46 TestForceSystemdFlag 40.73
47 TestForceSystemdEnv 29.43
48 TestKVMDriverInstallOrUpdate 3.43
52 TestErrorSpam/setup 23.73
53 TestErrorSpam/start 1.15
54 TestErrorSpam/status 1.46
55 TestErrorSpam/pause 1.62
56 TestErrorSpam/unpause 1.61
57 TestErrorSpam/stop 2.47
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 40.78
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 72.86
64 TestFunctional/serial/KubeContext 0.05
65 TestFunctional/serial/KubectlGetPods 0.07
68 TestFunctional/serial/CacheCmd/cache/add_remote 5.47
69 TestFunctional/serial/CacheCmd/cache/add_local 1.48
70 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
71 TestFunctional/serial/CacheCmd/cache/list 0.05
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.49
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.49
74 TestFunctional/serial/CacheCmd/cache/delete 0.09
75 TestFunctional/serial/MinikubeKubectlCmd 0.11
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
77 TestFunctional/serial/ExtraConfig 43.39
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.17
80 TestFunctional/serial/LogsFileCmd 1.29
82 TestFunctional/parallel/ConfigCmd 0.35
83 TestFunctional/parallel/DashboardCmd 28.6
84 TestFunctional/parallel/DryRun 0.85
85 TestFunctional/parallel/InternationalLanguage 0.32
86 TestFunctional/parallel/StatusCmd 1.74
90 TestFunctional/parallel/ServiceCmdConnect 8.02
91 TestFunctional/parallel/AddonsCmd 0.21
92 TestFunctional/parallel/PersistentVolumeClaim 25.82
94 TestFunctional/parallel/SSHCmd 1.51
95 TestFunctional/parallel/CpCmd 2.26
96 TestFunctional/parallel/MySQL 23.55
97 TestFunctional/parallel/FileSync 0.51
98 TestFunctional/parallel/CertSync 3.21
102 TestFunctional/parallel/NodeLabels 0.08
104 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
106 TestFunctional/parallel/License 0.15
107 TestFunctional/parallel/UpdateContextCmd/no_changes 0.34
108 TestFunctional/parallel/DockerEnv/bash 2.12
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.33
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
111 TestFunctional/parallel/Version/short 0.07
112 TestFunctional/parallel/Version/components 1.07
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.4
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.39
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.4
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.42
117 TestFunctional/parallel/ImageCommands/ImageBuild 4.92
118 TestFunctional/parallel/ImageCommands/Setup 1.38
119 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.98
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.35
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.7
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.64
127 TestFunctional/parallel/ServiceCmd/List 0.63
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.69
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.8
130 TestFunctional/parallel/ServiceCmd/Format 0.71
131 TestFunctional/parallel/ServiceCmd/URL 0.69
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.69
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.33
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.68
142 TestFunctional/parallel/ProfileCmd/profile_list 0.57
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.68
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.61
145 TestFunctional/parallel/MountCmd/any-port 10.71
146 TestFunctional/parallel/MountCmd/specific-port 3.2
147 TestFunctional/delete_addon-resizer_images 0.18
148 TestFunctional/delete_my-image_image 0.07
149 TestFunctional/delete_minikube_cached_images 0.08
153 TestImageBuild/serial/NormalBuild 2.06
154 TestImageBuild/serial/BuildWithBuildArg 1.14
155 TestImageBuild/serial/BuildWithDockerIgnore 0.48
156 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.39
159 TestIngressAddonLegacy/StartLegacyK8sCluster 55.52
161 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.31
162 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.44
163 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.09
166 TestJSONOutput/start/Command 41.32
167 TestJSONOutput/start/Audit 0
169 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Command 0.73
173 TestJSONOutput/pause/Audit 0
175 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Command 0.64
179 TestJSONOutput/unpause/Audit 0
181 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/stop/Command 5.94
185 TestJSONOutput/stop/Audit 0
187 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
189 TestErrorJSONOutput 0.46
191 TestKicCustomNetwork/create_custom_network 26.84
192 TestKicCustomNetwork/use_default_bridge_network 27.18
193 TestKicExistingNetwork 27.53
194 TestKicCustomSubnet 27.07
195 TestKicStaticIP 27.1
196 TestMainNoArgs 0.05
197 TestMinikubeProfile 56.99
200 TestMountStart/serial/StartWithMountFirst 8.34
201 TestMountStart/serial/VerifyMountFirst 0.46
202 TestMountStart/serial/StartWithMountSecond 7.88
203 TestMountStart/serial/VerifyMountSecond 0.47
204 TestMountStart/serial/DeleteFirst 2.16
205 TestMountStart/serial/VerifyMountPostDelete 0.46
206 TestMountStart/serial/Stop 1.4
207 TestMountStart/serial/RestartStopped 9.35
208 TestMountStart/serial/VerifyMountPostStop 0.46
211 TestMultiNode/serial/FreshStart2Nodes 71.28
212 TestMultiNode/serial/DeployApp2Nodes 43.03
213 TestMultiNode/serial/PingHostFrom2Pods 0.84
214 TestMultiNode/serial/AddNode 18.44
215 TestMultiNode/serial/ProfileList 0.51
216 TestMultiNode/serial/CopyFile 16.87
217 TestMultiNode/serial/StopNode 3.18
218 TestMultiNode/serial/StartAfterStop 13.11
219 TestMultiNode/serial/RestartKeepsNodes 95.81
220 TestMultiNode/serial/DeleteNode 6.21
221 TestMultiNode/serial/StopMultiNode 21.94
222 TestMultiNode/serial/RestartMultiNode 77.97
223 TestMultiNode/serial/ValidateNameConflict 27.69
228 TestPreload 114.44
230 TestScheduledStopUnix 97.99
231 TestSkaffold 57.06
233 TestInsufficientStorage 13.11
234 TestRunningBinaryUpgrade 70.59
236 TestKubernetesUpgrade 342.44
237 TestMissingContainerUpgrade 137.51
239 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
240 TestNoKubernetes/serial/StartWithK8s 38.24
241 TestNoKubernetes/serial/StartWithStopK8s 10.26
242 TestNoKubernetes/serial/Start 10.61
243 TestNoKubernetes/serial/VerifyK8sNotRunning 0.61
244 TestNoKubernetes/serial/ProfileList 2.08
245 TestNoKubernetes/serial/Stop 1.84
246 TestNoKubernetes/serial/StartNoArgs 9.56
247 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.55
248 TestStoppedBinaryUpgrade/Setup 0.37
249 TestStoppedBinaryUpgrade/Upgrade 67.58
250 TestStoppedBinaryUpgrade/MinikubeLogs 1.53
259 TestPause/serial/Start 41.3
273 TestStartStop/group/old-k8s-version/serial/FirstStart 109.05
275 TestStartStop/group/no-preload/serial/FirstStart 50.56
277 TestStartStop/group/embed-certs/serial/FirstStart 45.87
278 TestStartStop/group/no-preload/serial/DeployApp 8.37
279 TestStartStop/group/embed-certs/serial/DeployApp 9.4
280 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
281 TestStartStop/group/no-preload/serial/Stop 11.13
282 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.72
283 TestStartStop/group/embed-certs/serial/Stop 10.94
284 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
285 TestStartStop/group/no-preload/serial/SecondStart 563.31
286 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.3
287 TestStartStop/group/embed-certs/serial/SecondStart 322.1
288 TestStartStop/group/old-k8s-version/serial/DeployApp 8.44
289 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.71
291 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 76.86
292 TestStartStop/group/old-k8s-version/serial/Stop 11.04
293 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
294 TestStartStop/group/old-k8s-version/serial/SecondStart 62.27
295 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 13.02
296 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.39
297 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.7
299 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.99
300 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.49
301 TestStartStop/group/old-k8s-version/serial/Pause 3.54
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
303 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 561.73
305 TestStartStop/group/newest-cni/serial/FirstStart 40.12
306 TestStartStop/group/newest-cni/serial/DeployApp 0
307 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
308 TestStartStop/group/newest-cni/serial/Stop 11
309 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
310 TestStartStop/group/newest-cni/serial/SecondStart 29.38
311 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
312 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
313 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.5
314 TestStartStop/group/newest-cni/serial/Pause 3.55
315 TestNetworkPlugins/group/auto/Start 46.89
316 TestNetworkPlugins/group/auto/KubeletFlags 0.48
317 TestNetworkPlugins/group/auto/NetCatPod 9.25
318 TestNetworkPlugins/group/auto/DNS 0.16
319 TestNetworkPlugins/group/auto/Localhost 0.13
320 TestNetworkPlugins/group/auto/HairPin 0.14
321 TestNetworkPlugins/group/kindnet/Start 55.39
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
324 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.51
325 TestStartStop/group/embed-certs/serial/Pause 3.92
326 TestNetworkPlugins/group/calico/Start 71.02
327 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
328 TestNetworkPlugins/group/kindnet/KubeletFlags 0.5
329 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
330 TestNetworkPlugins/group/kindnet/DNS 0.16
331 TestNetworkPlugins/group/kindnet/Localhost 0.14
332 TestNetworkPlugins/group/kindnet/HairPin 0.15
333 TestNetworkPlugins/group/custom-flannel/Start 55.12
334 TestNetworkPlugins/group/calico/ControllerPod 5.02
335 TestNetworkPlugins/group/calico/KubeletFlags 0.57
336 TestNetworkPlugins/group/calico/NetCatPod 10.31
337 TestNetworkPlugins/group/calico/DNS 0.16
338 TestNetworkPlugins/group/calico/Localhost 0.17
339 TestNetworkPlugins/group/calico/HairPin 0.16
340 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.53
341 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.24
342 TestNetworkPlugins/group/false/Start 44.79
343 TestNetworkPlugins/group/custom-flannel/DNS 0.17
344 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
345 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
346 TestNetworkPlugins/group/enable-default-cni/Start 48.33
347 TestNetworkPlugins/group/false/KubeletFlags 0.53
348 TestNetworkPlugins/group/false/NetCatPod 9.3
349 TestNetworkPlugins/group/false/DNS 0.16
350 TestNetworkPlugins/group/false/Localhost 0.15
351 TestNetworkPlugins/group/false/HairPin 0.16
352 TestNetworkPlugins/group/flannel/Start 56.73
353 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.52
354 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.23
355 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
356 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
357 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
358 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
359 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
360 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.55
361 TestStartStop/group/no-preload/serial/Pause 3.97
362 TestNetworkPlugins/group/bridge/Start 53.66
363 TestNetworkPlugins/group/kubenet/Start 41.54
364 TestNetworkPlugins/group/flannel/ControllerPod 5.02
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.57
366 TestNetworkPlugins/group/flannel/NetCatPod 9.3
367 TestNetworkPlugins/group/flannel/DNS 0.17
368 TestNetworkPlugins/group/flannel/Localhost 0.15
369 TestNetworkPlugins/group/flannel/HairPin 0.15
370 TestNetworkPlugins/group/kubenet/KubeletFlags 0.54
371 TestNetworkPlugins/group/kubenet/NetCatPod 10.22
372 TestNetworkPlugins/group/bridge/KubeletFlags 0.54
373 TestNetworkPlugins/group/bridge/NetCatPod 10.28
374 TestNetworkPlugins/group/kubenet/DNS 0.16
375 TestNetworkPlugins/group/kubenet/Localhost 0.18
376 TestNetworkPlugins/group/kubenet/HairPin 0.15
377 TestNetworkPlugins/group/bridge/DNS 0.15
378 TestNetworkPlugins/group/bridge/Localhost 0.13
379 TestNetworkPlugins/group/bridge/HairPin 0.16
380 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.49
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.48
x
+
TestDownloadOnly/v1.16.0/json-events (9.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-383345 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-383345 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (9.110673008s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-383345
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-383345: exit status 85 (59.75961ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-383345 | jenkins | v1.29.0 | 23 Mar 23 22:55 UTC |          |
	|         | -p download-only-383345        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/23 22:55:48
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0323 22:55:48.408604   68714 out.go:296] Setting OutFile to fd 1 ...
	I0323 22:55:48.408754   68714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 22:55:48.408764   68714 out.go:309] Setting ErrFile to fd 2...
	I0323 22:55:48.408770   68714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 22:55:48.408889   68714 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
	W0323 22:55:48.409047   68714 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16143-62012/.minikube/config/config.json: open /home/jenkins/minikube-integration/16143-62012/.minikube/config/config.json: no such file or directory
	I0323 22:55:48.409699   68714 out.go:303] Setting JSON to true
	I0323 22:55:48.410622   68714 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5895,"bootTime":1679606254,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0323 22:55:48.410680   68714 start.go:135] virtualization: kvm guest
	I0323 22:55:48.413462   68714 out.go:97] [download-only-383345] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0323 22:55:48.415067   68714 out.go:169] MINIKUBE_LOCATION=16143
	W0323 22:55:48.413602   68714 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball: no such file or directory
	I0323 22:55:48.413646   68714 notify.go:220] Checking for updates...
	I0323 22:55:48.418038   68714 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0323 22:55:48.419597   68714 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	I0323 22:55:48.421125   68714 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	I0323 22:55:48.422620   68714 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0323 22:55:48.425334   68714 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0323 22:55:48.425660   68714 driver.go:365] Setting default libvirt URI to qemu:///system
	I0323 22:55:48.493039   68714 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0323 22:55:48.493163   68714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0323 22:55:48.611194   68714 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:41 SystemTime:2023-03-23 22:55:48.60203143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0323 22:55:48.611314   68714 docker.go:294] overlay module found
	I0323 22:55:48.613522   68714 out.go:97] Using the docker driver based on user configuration
	I0323 22:55:48.613548   68714 start.go:295] selected driver: docker
	I0323 22:55:48.613555   68714 start.go:856] validating driver "docker" against <nil>
	I0323 22:55:48.613789   68714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0323 22:55:48.731699   68714 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:41 SystemTime:2023-03-23 22:55:48.723342168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0323 22:55:48.731827   68714 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0323 22:55:48.732387   68714 start_flags.go:386] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0323 22:55:48.732555   68714 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0323 22:55:48.734651   68714 out.go:169] Using Docker driver with root privileges
	I0323 22:55:48.735921   68714 cni.go:84] Creating CNI manager for ""
	I0323 22:55:48.735944   68714 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0323 22:55:48.735956   68714 start_flags.go:319] config:
	{Name:download-only-383345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-383345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0323 22:55:48.737484   68714 out.go:97] Starting control plane node download-only-383345 in cluster download-only-383345
	I0323 22:55:48.737537   68714 cache.go:120] Beginning downloading kic base image for docker with docker
	I0323 22:55:48.738782   68714 out.go:97] Pulling base image ...
	I0323 22:55:48.738829   68714 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0323 22:55:48.738933   68714 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0323 22:55:48.766710   68714 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0323 22:55:48.766741   68714 cache.go:57] Caching tarball of preloaded images
	I0323 22:55:48.766952   68714 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0323 22:55:48.768821   68714 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0323 22:55:48.768849   68714 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0323 22:55:48.801834   68714 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0323 22:55:48.803116   68714 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 to local cache
	I0323 22:55:48.803284   68714 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local cache directory
	I0323 22:55:48.803353   68714 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 to local cache
	I0323 22:55:51.085327   68714 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0323 22:55:51.085447   68714 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0323 22:55:51.810533   68714 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0323 22:55:51.810919   68714 profile.go:148] Saving config to /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/download-only-383345/config.json ...
	I0323 22:55:51.810955   68714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/download-only-383345/config.json: {Name:mk7e75379bcd8cd6d24e9b7107ee4669c3434905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0323 22:55:51.811135   68714 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0323 22:55:51.811398   68714 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/16143-62012/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-383345"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
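A side note on the preload download logged above: minikube requests the tarball with a `?checksum=md5:...` query parameter and verifies the saved file. The snippet below is only an illustration of that kind of check, not minikube's own verification code; it assumes the tarball sits in the current directory, and the expected value is the md5 taken from the download URL in the log.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// Illustrative only: recompute the md5 of a cached preload tarball and
	// compare it with the value that follows "checksum=md5:" in the download URL.
	func main() {
		const expected = "326f3ce331abb64565b50b8c9e791244" // from the logged URL
		f, err := os.Open("preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		got := hex.EncodeToString(h.Sum(nil))
		fmt.Printf("match=%v got=%s\n", got == expected, got)
	}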

                                                
                                    
x
+
TestDownloadOnly/v1.26.3/json-events (4.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-383345 --force --alsologtostderr --kubernetes-version=v1.26.3 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-383345 --force --alsologtostderr --kubernetes-version=v1.26.3 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.42495577s)
--- PASS: TestDownloadOnly/v1.26.3/json-events (4.43s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.3/preload-exists
--- PASS: TestDownloadOnly/v1.26.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-383345
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-383345: exit status 85 (60.879039ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-383345 | jenkins | v1.29.0 | 23 Mar 23 22:55 UTC |          |
	|         | -p download-only-383345        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-383345 | jenkins | v1.29.0 | 23 Mar 23 22:55 UTC |          |
	|         | -p download-only-383345        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/23 22:55:57
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0323 22:55:57.579982   68956 out.go:296] Setting OutFile to fd 1 ...
	I0323 22:55:57.580111   68956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 22:55:57.580121   68956 out.go:309] Setting ErrFile to fd 2...
	I0323 22:55:57.580129   68956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 22:55:57.580242   68956 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
	W0323 22:55:57.580373   68956 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16143-62012/.minikube/config/config.json: open /home/jenkins/minikube-integration/16143-62012/.minikube/config/config.json: no such file or directory
	I0323 22:55:57.580780   68956 out.go:303] Setting JSON to true
	I0323 22:55:57.581703   68956 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5904,"bootTime":1679606254,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0323 22:55:57.581769   68956 start.go:135] virtualization: kvm guest
	I0323 22:55:57.584637   68956 out.go:97] [download-only-383345] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0323 22:55:57.586613   68956 out.go:169] MINIKUBE_LOCATION=16143
	I0323 22:55:57.584851   68956 notify.go:220] Checking for updates...
	I0323 22:55:57.590016   68956 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0323 22:55:57.591727   68956 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	I0323 22:55:57.593562   68956 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	I0323 22:55:57.595132   68956 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-383345"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.0-beta.0/json-events (4.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.0-beta.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-383345 --force --alsologtostderr --kubernetes-version=v1.27.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-383345 --force --alsologtostderr --kubernetes-version=v1.27.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.379222057s)
--- PASS: TestDownloadOnly/v1.27.0-beta.0/json-events (4.38s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.27.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.0-beta.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-383345
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-383345: exit status 85 (57.336418ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only             | download-only-383345 | jenkins | v1.29.0 | 23 Mar 23 22:55 UTC |          |
	|         | -p download-only-383345             |                      |         |         |                     |          |
	|         | --force --alsologtostderr           |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0        |                      |         |         |                     |          |
	|         | --container-runtime=docker          |                      |         |         |                     |          |
	|         | --driver=docker                     |                      |         |         |                     |          |
	|         | --container-runtime=docker          |                      |         |         |                     |          |
	| start   | -o=json --download-only             | download-only-383345 | jenkins | v1.29.0 | 23 Mar 23 22:55 UTC |          |
	|         | -p download-only-383345             |                      |         |         |                     |          |
	|         | --force --alsologtostderr           |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.3        |                      |         |         |                     |          |
	|         | --container-runtime=docker          |                      |         |         |                     |          |
	|         | --driver=docker                     |                      |         |         |                     |          |
	|         | --container-runtime=docker          |                      |         |         |                     |          |
	| start   | -o=json --download-only             | download-only-383345 | jenkins | v1.29.0 | 23 Mar 23 22:56 UTC |          |
	|         | -p download-only-383345             |                      |         |         |                     |          |
	|         | --force --alsologtostderr           |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.0-beta.0 |                      |         |         |                     |          |
	|         | --container-runtime=docker          |                      |         |         |                     |          |
	|         | --driver=docker                     |                      |         |         |                     |          |
	|         | --container-runtime=docker          |                      |         |         |                     |          |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/23 22:56:02
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0323 22:56:02.069245   69207 out.go:296] Setting OutFile to fd 1 ...
	I0323 22:56:02.069367   69207 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 22:56:02.069376   69207 out.go:309] Setting ErrFile to fd 2...
	I0323 22:56:02.069380   69207 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 22:56:02.069518   69207 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
	W0323 22:56:02.069640   69207 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16143-62012/.minikube/config/config.json: open /home/jenkins/minikube-integration/16143-62012/.minikube/config/config.json: no such file or directory
	I0323 22:56:02.070051   69207 out.go:303] Setting JSON to true
	I0323 22:56:02.070913   69207 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5908,"bootTime":1679606254,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0323 22:56:02.070967   69207 start.go:135] virtualization: kvm guest
	I0323 22:56:02.073512   69207 out.go:97] [download-only-383345] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0323 22:56:02.075330   69207 out.go:169] MINIKUBE_LOCATION=16143
	I0323 22:56:02.073728   69207 notify.go:220] Checking for updates...
	I0323 22:56:02.078413   69207 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0323 22:56:02.079952   69207 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	I0323 22:56:02.081745   69207 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	I0323 22:56:02.083595   69207 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-383345"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.0-beta.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.62s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.62s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-383345
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.65s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-018617 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-018617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-018617
--- PASS: TestDownloadOnlyKic (1.65s)

                                                
                                    
x
+
TestBinaryMirror (1.15s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-627002 --alsologtostderr --binary-mirror http://127.0.0.1:39693 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-627002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-627002
--- PASS: TestBinaryMirror (1.15s)

                                                
                                    
x
+
TestOffline (55.95s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-450117 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-450117 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (52.169344589s)
helpers_test.go:175: Cleaning up "offline-docker-450117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-450117
E0323 23:22:51.550515   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-450117: (3.777019313s)
--- PASS: TestOffline (55.95s)

                                                
                                    
x
+
TestAddons/Setup (100.94s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-213626 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-213626 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m40.935854963s)
--- PASS: TestAddons/Setup (100.94s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 12.704427ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-27kpf" [3bf0e60d-fbb5-4bd6-b9df-8f7c60a4cc23] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007956445s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qmch4" [5921e79d-c4a4-4cf9-ad4d-a381e0d75751] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00693174s
addons_test.go:305: (dbg) Run:  kubectl --context addons-213626 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-213626 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-213626 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.936505441s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-213626 ip
2023/03/23 22:58:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-213626 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.14s)
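
The registry addon check above boils down to two probes: an in-cluster wget against the registry Service DNS name and a host-side request to the node IP on port 5000. A hand-run sketch using the same commands as the test (profile name and context are the ones from this run; the curl is just one way to repeat the host-side GET shown in the log):

	kubectl --context addons-213626 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl -s "http://$(out/minikube-linux-amd64 -p addons-213626 ip):5000"
	out/minikube-linux-amd64 -p addons-213626 addons disable registry --alsologtostderr -v=1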

                                                
                                    
x
+
TestAddons/parallel/Ingress (34.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-213626 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context addons-213626 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.83048062s)
addons_test.go:197: (dbg) Run:  kubectl --context addons-213626 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-213626 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [928fae39-e0ca-471e-81b7-9b033313874c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [928fae39-e0ca-471e-81b7-9b033313874c] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.009077261s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-213626 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-213626 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-213626 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-213626 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p addons-213626 addons disable ingress-dns --alsologtostderr -v=1: (1.5182033s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-213626 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-213626 addons disable ingress --alsologtostderr -v=1: (7.796872015s)
--- PASS: TestAddons/parallel/Ingress (34.24s)
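
The two assertions in this test are independent: the curl (run inside the node over ssh) exercises the nginx ingress controller with a Host header, while the nslookup against the node IP exercises the ingress-dns addon as a DNS server for the test hostname. The same checks by hand, using the commands from the log:

	out/minikube-linux-amd64 -p addons-213626 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-213626 ip)"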

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.68s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.184733ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-272nn" [993d79c5-04a0-4433-91af-d3570125aae3] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009572641s
addons_test.go:380: (dbg) Run:  kubectl --context addons-213626 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-213626 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.68s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 1.888664ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-tb9kw" [60248a15-f851-49e8-a4d0-9c0d1b3d4e51] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.06402023s
addons_test.go:438: (dbg) Run:  kubectl --context addons-213626 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-213626 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.103695704s)
addons_test.go:443: kubectl --context addons-213626 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-213626 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.79s)
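
The Tiller check simply runs the helm client image inside the cluster and asks it for its version; the "Unable to use a TTY" warning in the log is expected because the test attaches without a real terminal. The equivalent manual invocation:

	kubectl --context addons-213626 run --rm helm-test --restart=Never \
	  --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
	out/minikube-linux-amd64 -p addons-213626 addons disable helm-tiller --alsologtostderr -v=1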

                                                
                                    
x
+
TestAddons/parallel/CSI (40.21s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 5.857327ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-213626 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-213626 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-213626 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-213626 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-213626 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-213626 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-213626 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [19681ea1-0463-40fe-bbfa-3c4ad45e9539] Pending
helpers_test.go:344: "task-pv-pod" [19681ea1-0463-40fe-bbfa-3c4ad45e9539] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [19681ea1-0463-40fe-bbfa-3c4ad45e9539] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.007240449s
addons_test.go:549: (dbg) Run:  kubectl --context addons-213626 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-213626 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-213626 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-213626 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-213626 delete pod task-pv-pod: (1.316743843s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-213626 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-213626 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-213626 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-213626 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-213626 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-213626 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [942d25df-cb7c-4f59-ac0b-59bad91d502b] Pending
helpers_test.go:344: "task-pv-pod-restore" [942d25df-cb7c-4f59-ac0b-59bad91d502b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [942d25df-cb7c-4f59-ac0b-59bad91d502b] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00592465s
addons_test.go:591: (dbg) Run:  kubectl --context addons-213626 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-213626 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-213626 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-213626 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-213626 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.49152275s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-213626 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.21s)
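
The CSI test is a standard provision / snapshot / restore round trip against the csi-hostpath driver. A condensed sketch of the same sequence with the testdata manifests used above (their contents are not reproduced in this report, so the file names below are only references to the repository's testdata):

	kubectl --context addons-213626 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-213626 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-213626 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-213626 delete pod task-pv-pod
	kubectl --context addons-213626 delete pvc hpvc
	kubectl --context addons-213626 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-213626 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml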

                                                
                                    
x
+
TestAddons/parallel/Headlamp (10.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-213626 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-213626 --alsologtostderr -v=1: (1.127630027s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58c48fc87f-w97dw" [94d4cad9-0a1c-49ee-95a4-6fd6cee91b4e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58c48fc87f-w97dw" [94d4cad9-0a1c-49ee-95a4-6fd6cee91b4e] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.007334793s
--- PASS: TestAddons/parallel/Headlamp (10.14s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.36s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-58d646969f-tg5f6" [58679e22-1da2-405c-b13e-8c6c6c70e74d] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005913633s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-213626
--- PASS: TestAddons/parallel/CloudSpanner (5.36s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-213626 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-213626 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (11.25s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-213626
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-213626: (11.024354601s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-213626
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-213626
--- PASS: TestAddons/StoppedEnableDisable (11.25s)

                                                
                                    
x
+
TestCertOptions (30.27s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-911082 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-911082 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (26.306081437s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-911082 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-911082 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-911082 -- "sudo cat /etc/kubernetes/admin.conf"
E0323 23:26:33.109608   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
E0323 23:26:33.114887   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
E0323 23:26:33.125166   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
E0323 23:26:33.145472   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
E0323 23:26:33.186812   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
E0323 23:26:33.267112   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
E0323 23:26:33.427505   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "cert-options-911082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-911082
E0323 23:26:33.747982   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
E0323 23:26:34.389134   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-911082: (2.740281771s)
--- PASS: TestCertOptions (30.27s)
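
What the openssl call verifies is that the extra --apiserver-ips / --apiserver-names values and the non-default --apiserver-port end up in the apiserver serving certificate and in admin.conf. A manual spot check with the same flags (the greps are only for readability and are not part of the test):

	out/minikube-linux-amd64 start -p cert-options-911082 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 -p cert-options-911082 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A2 "Alternative Name"
	out/minikube-linux-amd64 ssh -p cert-options-911082 -- "sudo cat /etc/kubernetes/admin.conf" | grep server: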

                                                
                                    
x
+
TestCertExpiration (276.13s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-574094 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-574094 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (1m0.200960386s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-574094 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-574094 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (32.841628973s)
helpers_test.go:175: Cleaning up "cert-expiration-574094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-574094
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-574094: (3.085010145s)
--- PASS: TestCertExpiration (276.13s)
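
The long runtime here is by design: the first start issues cluster certificates with only a 3-minute lifetime via --cert-expiration, the test then waits for them to lapse, and the second start has to come up cleanly anyway with the longer 8760h lifetime. The same sequence by hand (the intermediate wait is where most of the 276s goes):

	out/minikube-linux-amd64 start -p cert-expiration-574094 --memory=2048 --cert-expiration=3m \
	  --driver=docker --container-runtime=docker
	# wait for the short-lived certificates to expire (roughly three minutes), then:
	out/minikube-linux-amd64 start -p cert-expiration-574094 --memory=2048 --cert-expiration=8760h \
	  --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 delete -p cert-expiration-574094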

                                                
                                    
x
+
TestDockerFlags (27.87s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-732774 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-732774 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.027268702s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-732774 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-732774 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-732774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-732774
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-732774: (2.776333529s)
--- PASS: TestDockerFlags (27.87s)
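
The two systemctl queries are the actual assertions: --docker-env values should show up as Environment= entries on the docker unit inside the node, and --docker-opt values as extra arguments on its ExecStart line. Reproduced by hand with the flags from this run:

	out/minikube-linux-amd64 start -p docker-flags-732774 --memory=2048 --install-addons=false --wait=false \
	  --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true \
	  --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 -p docker-flags-732774 ssh "sudo systemctl show docker --property=Environment --no-pager"
	out/minikube-linux-amd64 -p docker-flags-732774 ssh "sudo systemctl show docker --property=ExecStart --no-pager"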

                                                
                                    
x
+
TestForceSystemdFlag (40.73s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-569448 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-569448 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.111824742s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-569448 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-569448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-569448
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-569448: (2.99133735s)
--- PASS: TestForceSystemdFlag (40.73s)

                                                
                                    
x
+
TestForceSystemdEnv (29.43s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-286741 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0323 23:26:38.777702   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-286741 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.204981708s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-286741 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-286741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-286741
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-286741: (3.626042122s)
--- PASS: TestForceSystemdEnv (29.43s)
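
Both force-systemd tests end with the same assertion: Docker inside the node must report systemd as its cgroup driver. TestForceSystemdFlag gets there via the --force-systemd start flag; TestForceSystemdEnv presumably relies on the MINIKUBE_FORCE_SYSTEMD environment variable set by the test harness (the variable itself is not visible in this log). The check, by hand:

	out/minikube-linux-amd64 start -p force-systemd-flag-569448 --memory=2048 --force-systemd \
	  --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 -p force-systemd-flag-569448 ssh "docker info --format {{.CgroupDriver}}"
	# expected output: systemd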

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.43s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
E0323 23:26:35.670316   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (3.43s)

                                                
                                    
x
+
TestErrorSpam/setup (23.73s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-213889 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-213889 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-213889 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-213889 --driver=docker  --container-runtime=docker: (23.73251854s)
--- PASS: TestErrorSpam/setup (23.73s)

                                                
                                    
x
+
TestErrorSpam/start (1.15s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 start --dry-run
--- PASS: TestErrorSpam/start (1.15s)

                                                
                                    
x
+
TestErrorSpam/status (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 status
--- PASS: TestErrorSpam/status (1.46s)

                                                
                                    
x
+
TestErrorSpam/pause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 pause
--- PASS: TestErrorSpam/pause (1.62s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

                                                
                                    
x
+
TestErrorSpam/stop (2.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 stop: (2.105913014s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-213889 --log_dir /tmp/nospam-213889 stop
--- PASS: TestErrorSpam/stop (2.47s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/test/nested/copy/68702/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (40.78s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-amd64 start -p functional-378114 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2229: (dbg) Done: out/minikube-linux-amd64 start -p functional-378114 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (40.779839886s)
--- PASS: TestFunctional/serial/StartWithProxy (40.78s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (72.86s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-378114 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-378114 --alsologtostderr -v=8: (1m12.856406275s)
functional_test.go:658: soft start took 1m12.857115553s for "functional-378114" cluster.
--- PASS: TestFunctional/serial/SoftStart (72.86s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-378114 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (5.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 cache add k8s.gcr.io/pause:3.1: (1.946455433s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 cache add k8s.gcr.io/pause:3.3: (1.919533364s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 cache add k8s.gcr.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 cache add k8s.gcr.io/pause:latest: (1.600175187s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.47s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-378114 /tmp/TestFunctionalserialCacheCmdcacheadd_local235612063/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 cache add minikube-local-cache-test:functional-378114
functional_test.go:1084: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 cache add minikube-local-cache-test:functional-378114: (1.097347233s)
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 cache delete minikube-local-cache-test:functional-378114
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-378114
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.49s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-378114 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (475.135199ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 cache reload: (1.037479238s)
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.49s)
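
This sequence shows why cache reload exists: cache add keeps a copy of the image on the host, so after the image is deleted inside the node (docker rmi, confirmed missing by crictl inspecti), cache reload pushes the cached copy back in and the second inspecti succeeds. Condensed:

	out/minikube-linux-amd64 -p functional-378114 cache add k8s.gcr.io/pause:latest
	out/minikube-linux-amd64 -p functional-378114 ssh sudo docker rmi k8s.gcr.io/pause:latest
	out/minikube-linux-amd64 -p functional-378114 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exits non-zero: image gone
	out/minikube-linux-amd64 -p functional-378114 cache reload
	out/minikube-linux-amd64 -p functional-378114 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again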

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 kubectl -- --context functional-378114 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-378114 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (43.39s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-378114 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-378114 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.391787146s)
functional_test.go:756: restart took 43.391984495s for "functional-378114" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.39s)
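
--extra-config uses a component.key=value form and hands the flag straight to the named Kubernetes component on the next start; here it turns on an extra admission plugin on the apiserver, which means a full restart of the existing cluster (hence the ~43s). The flag as used above:

	out/minikube-linux-amd64 start -p functional-378114 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all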

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-378114 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.17s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 logs: (1.173315785s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 logs --file /tmp/TestFunctionalserialLogsFileCmd731747048/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 logs --file /tmp/TestFunctionalserialLogsFileCmd731747048/001/logs.txt: (1.292188468s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-378114 config get cpus: exit status 14 (53.472202ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-378114 config get cpus: exit status 14 (59.03669ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (28.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-378114 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-378114 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 135513: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.60s)
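
dashboard --url keeps a local proxy running in the foreground and prints the URL instead of opening a browser, which is why the test launches it as a background daemon and later tries to kill the process (the "unable to kill pid ... process already finished" line just means it had already exited). Run interactively it is simply:

	out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-378114 --alsologtostderr -v=1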

TestFunctional/parallel/DryRun (0.85s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-378114 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-378114 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (380.999852ms)

-- stdout --
	* [functional-378114] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0323 23:03:01.956397  133871 out.go:296] Setting OutFile to fd 1 ...
	I0323 23:03:01.956633  133871 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:03:01.956644  133871 out.go:309] Setting ErrFile to fd 2...
	I0323 23:03:01.956652  133871 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:03:01.956785  133871 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
	I0323 23:03:01.957609  133871 out.go:303] Setting JSON to false
	I0323 23:03:01.959305  133871 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6328,"bootTime":1679606254,"procs":585,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0323 23:03:01.959457  133871 start.go:135] virtualization: kvm guest
	I0323 23:03:01.962997  133871 out.go:177] * [functional-378114] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0323 23:03:01.964849  133871 out.go:177]   - MINIKUBE_LOCATION=16143
	I0323 23:03:01.966392  133871 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0323 23:03:01.964795  133871 notify.go:220] Checking for updates...
	I0323 23:03:01.970399  133871 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	I0323 23:03:01.972214  133871 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	I0323 23:03:01.985458  133871 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0323 23:03:01.987974  133871 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0323 23:03:01.991033  133871 config.go:182] Loaded profile config "functional-378114": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:03:01.991695  133871 driver.go:365] Setting default libvirt URI to qemu:///system
	I0323 23:03:02.081904  133871 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0323 23:03:02.082032  133871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0323 23:03:02.238437  133871 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:38 SystemTime:2023-03-23 23:03:02.228498615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0323 23:03:02.238565  133871 docker.go:294] overlay module found
	I0323 23:03:02.277618  133871 out.go:177] * Using the docker driver based on existing profile
	I0323 23:03:02.279474  133871 start.go:295] selected driver: docker
	I0323 23:03:02.279503  133871 start.go:856] validating driver "docker" against &{Name:functional-378114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:functional-378114 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0323 23:03:02.279692  133871 start.go:867] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0323 23:03:02.282600  133871 out.go:177] 
	W0323 23:03:02.284490  133871 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0323 23:03:02.285985  133871 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-378114 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.85s)
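
Note: the first dry-run asks for only 250MB and exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY because minikube reports a usable minimum of 1800MB; the second dry-run omits --memory, reuses the existing profile, and passes. The failing case can be reproduced as:
	out/minikube-linux-amd64 start -p functional-378114 --dry-run --memory 250MB --driver=docker --container-runtime=docker
	echo $?    # 23 expected, since 250MB is below the 1800MB minimum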

TestFunctional/parallel/InternationalLanguage (0.32s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-378114 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-378114 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (318.353896ms)

-- stdout --
	* [functional-378114] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0323 23:03:02.811088  134555 out.go:296] Setting OutFile to fd 1 ...
	I0323 23:03:02.811274  134555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:03:02.811286  134555 out.go:309] Setting ErrFile to fd 2...
	I0323 23:03:02.811294  134555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:03:02.811479  134555 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
	I0323 23:03:02.812100  134555 out.go:303] Setting JSON to false
	I0323 23:03:02.813525  134555 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6329,"bootTime":1679606254,"procs":583,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0323 23:03:02.813623  134555 start.go:135] virtualization: kvm guest
	I0323 23:03:02.816955  134555 out.go:177] * [functional-378114] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0323 23:03:02.818956  134555 notify.go:220] Checking for updates...
	I0323 23:03:02.820776  134555 out.go:177]   - MINIKUBE_LOCATION=16143
	I0323 23:03:02.822624  134555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0323 23:03:02.824531  134555 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	I0323 23:03:02.826120  134555 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	I0323 23:03:02.827700  134555 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0323 23:03:02.829133  134555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0323 23:03:02.830932  134555 config.go:182] Loaded profile config "functional-378114": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:03:02.831339  134555 driver.go:365] Setting default libvirt URI to qemu:///system
	I0323 23:03:02.918876  134555 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0323 23:03:02.918978  134555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0323 23:03:03.063085  134555 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:38 SystemTime:2023-03-23 23:03:03.054100726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0323 23:03:03.063190  134555 docker.go:294] overlay module found
	I0323 23:03:03.065763  134555 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0323 23:03:03.067222  134555 start.go:295] selected driver: docker
	I0323 23:03:03.067240  134555 start.go:856] validating driver "docker" against &{Name:functional-378114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:functional-378114 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0323 23:03:03.067343  134555 start.go:867] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0323 23:03:03.069664  134555 out.go:177] 
	W0323 23:03:03.071297  134555 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0323 23:03:03.072878  134555 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.32s)
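
Note: this is the same under-sized dry-run as above, checked for localized (French) output. The environment is not shown in this log, but presumably the harness runs the command under a French locale, along the lines of:
	LC_ALL=fr out/minikube-linux-amd64 start -p functional-378114 --dry-run --memory 250MB --driver=docker --container-runtime=docker
	# expect the RSRC_INSUFFICIENT_REQ_MEMORY message translated, as in the stdout above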

TestFunctional/parallel/StatusCmd (1.74s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.74s)

TestFunctional/parallel/ServiceCmdConnect (8.02s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-378114 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-378114 expose deployment hello-node-connect --type=NodePort --port=8080
E0323 23:02:54.110454   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-hg2f9" [a965888c-9f2b-4fee-aa2e-d95254018c01] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-hg2f9" [a965888c-9f2b-4fee-aa2e-d95254018c01] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.008957068s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.49.2:31859
functional_test.go:1673: http://192.168.49.2:31859: success! body:

Hostname: hello-node-connect-5cf7cc858f-hg2f9

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31859
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.02s)

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (25.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [dcf088ff-c624-45fd-9c37-0e526ba3e2d4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010373707s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-378114 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-378114 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-378114 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-378114 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ca76ebaa-507b-43cd-a84c-fb4f8fd79cb4] Pending
helpers_test.go:344: "sp-pod" [ca76ebaa-507b-43cd-a84c-fb4f8fd79cb4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ca76ebaa-507b-43cd-a84c-fb4f8fd79cb4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.007395313s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-378114 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-378114 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-378114 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [adfef562-f158-4e5f-8d33-3ed637b6d8a6] Pending
helpers_test.go:344: "sp-pod" [adfef562-f158-4e5f-8d33-3ed637b6d8a6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [adfef562-f158-4e5f-8d33-3ed637b6d8a6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.045378334s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-378114 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.82s)
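
Note: the sequence above verifies that data on the claim survives pod re-creation: write a file under the mounted volume, delete and re-apply the pod, then confirm the file is still there. The equivalent manual steps, using the testdata manifests referenced above:
	kubectl --context functional-378114 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-378114 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-378114 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-378114 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-378114 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-378114 exec sp-pod -- ls /tmp/mount    # the file written earlier should still be listed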

TestFunctional/parallel/SSHCmd (1.51s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.51s)

TestFunctional/parallel/CpCmd (2.26s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh -n functional-378114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 cp functional-378114:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1458885499/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh -n functional-378114 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.26s)

TestFunctional/parallel/MySQL (23.55s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-378114 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-g8bv9" [5a18316e-1a83-40e6-bad5-e71f1d0193b5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-g8bv9" [5a18316e-1a83-40e6-bad5-e71f1d0193b5] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.00928623s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-378114 exec mysql-888f84dd9-g8bv9 -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-378114 exec mysql-888f84dd9-g8bv9 -- mysql -ppassword -e "show databases;": exit status 1 (154.594915ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-378114 exec mysql-888f84dd9-g8bv9 -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-378114 exec mysql-888f84dd9-g8bv9 -- mysql -ppassword -e "show databases;": exit status 1 (139.700238ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-378114 exec mysql-888f84dd9-g8bv9 -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-378114 exec mysql-888f84dd9-g8bv9 -- mysql -ppassword -e "show databases;": exit status 1 (141.059734ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-378114 exec mysql-888f84dd9-g8bv9 -- mysql -ppassword -e "show databases;"
2023/03/23 23:03:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (23.55s)
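
Note: the first three exec attempts fail with ERROR 1045 and then ERROR 2002, which is typical while the MySQL container is still initializing and restarting mysqld; the test simply retries this probe (pod name is specific to this run) until it exits 0:
	kubectl --context functional-378114 exec mysql-888f84dd9-g8bv9 -- mysql -ppassword -e "show databases;"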

TestFunctional/parallel/FileSync (0.51s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/68702/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "sudo cat /etc/test/nested/copy/68702/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.51s)

TestFunctional/parallel/CertSync (3.21s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/68702.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "sudo cat /etc/ssl/certs/68702.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/68702.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "sudo cat /usr/share/ca-certificates/68702.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/687022.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "sudo cat /etc/ssl/certs/687022.pem"
E0323 23:03:01.792255   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/687022.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "sudo cat /usr/share/ca-certificates/687022.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.21s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-378114 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-378114 ssh "sudo systemctl is-active crio": exit status 1 (575.481506ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
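
Note: the non-zero exit is the expected result here. With the docker runtime in use, crio should not be running, and "systemctl is-active" exits non-zero (status 3 in this run) for an inactive unit; "minikube ssh" propagates that status. Checked manually:
	out/minikube-linux-amd64 -p functional-378114 ssh "sudo systemctl is-active crio"    # prints "inactive", exits non-zero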

TestFunctional/parallel/License (0.15s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

TestFunctional/parallel/DockerEnv/bash (2.12s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-378114 docker-env) && out/minikube-linux-amd64 status -p functional-378114"
functional_test.go:494: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-378114 docker-env) && out/minikube-linux-amd64 status -p functional-378114": (1.325791469s)
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-378114 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.07s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 version -o=json --components
functional_test.go:2265: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 version -o=json --components: (1.067036422s)
--- PASS: TestFunctional/parallel/Version/components (1.07s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image ls --format short
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-378114 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-378114
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-378114
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.40s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image ls --format table
E0323 23:03:12.033085   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-378114 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-378114 | 250da2c13b1df | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.26.3           | ce8c2293ef09c | 123MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/google-containers/addon-resizer      | functional-378114 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.26.3           | 1d9b3cbae03ce | 134MB  |
| registry.k8s.io/kube-proxy                  | v1.26.3           | 92ed2bec97a63 | 65.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-scheduler              | v1.26.3           | 5a79047369329 | 56.4MB |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | latest            | ac232364af842 | 142MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.39s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image ls --format json
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-378114 image ls --format json:
[{"id":"1d9b3cbae03cea2a1766cfa5bf06a5a9c7a7bdbc6f5322756e29ac78e76f2708","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.3"],"size":"134000000"},{"id":"92ed2bec97a637010666d6c4aa4d69b672baec0fd5d236d142e4227a3a0557d8","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.3"],"size":"65599999"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"250da2c13b1dfefa7f2836dd24c88c86b74b7fba8ba477ea7a9313f7279005d2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-378114"],"size":"30"},{"id":"ac232364af842735579e922641ae2f67d5b8ea97df33a207c5ea
05f60c63a92d","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"ce8c2293ef09c9987773345638026f9f7aed16bc52e7a6ea507f0c655ab17161","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.3"],"size":"123000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"5a79047369329dff4a02e705e650664d2019e583b802416447a6a17e9debb62d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.3"],"size":"56
400000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-378114"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.40s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image ls --format yaml
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-378114 image ls --format yaml:
- id: 1d9b3cbae03cea2a1766cfa5bf06a5a9c7a7bdbc6f5322756e29ac78e76f2708
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.3
size: "134000000"
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-378114
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: ac232364af842735579e922641ae2f67d5b8ea97df33a207c5ea05f60c63a92d
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 5a79047369329dff4a02e705e650664d2019e583b802416447a6a17e9debb62d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.3
size: "56400000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 250da2c13b1dfefa7f2836dd24c88c86b74b7fba8ba477ea7a9313f7279005d2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-378114
size: "30"
- id: ce8c2293ef09c9987773345638026f9f7aed16bc52e7a6ea507f0c655ab17161
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.3
size: "123000000"
- id: 92ed2bec97a637010666d6c4aa4d69b672baec0fd5d236d142e4227a3a0557d8
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.3
size: "65599999"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.42s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-378114 ssh pgrep buildkitd: exit status 1 (588.753267ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image build -t localhost/my-image:functional-378114 testdata/build
functional_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 image build -t localhost/my-image:functional-378114 testdata/build: (3.980251047s)
functional_test.go:318: (dbg) Stdout: out/minikube-linux-amd64 -p functional-378114 image build -t localhost/my-image:functional-378114 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 0d064b226550
Removing intermediate container 0d064b226550
---> fd2c9ca2a566
Step 3/3 : ADD content.txt /
---> 5e2f24889688
Successfully built 5e2f24889688
Successfully tagged localhost/my-image:functional-378114
functional_test.go:321: (dbg) Stderr: out/minikube-linux-amd64 -p functional-378114 image build -t localhost/my-image:functional-378114 testdata/build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.92s)
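
The Step 1/3 .. 3/3 lines above imply what the testdata/build context contains. A minimal sketch that reproduces an equivalent build by hand, assuming the context only needs a Dockerfile plus the content.txt file referenced in Step 3/3 (the real testdata/build directory is not shown in this log):

mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
echo test > content.txt                     # any file works; the name comes from Step 3/3
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-378114 image build -t localhost/my-image:functional-378114 .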

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.295142436s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-378114
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-378114 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-378114 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-2rqd7" [7887a6ae-0605-477f-af7d-13c291fe8301] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6fddd6858d-2rqd7" [7887a6ae-0605-477f-af7d-13c291fe8301] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.013934141s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)
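
The two kubectl commands above are the whole deployment: create a Deployment from the echoserver image, then expose it as a NodePort Service on port 8080. Once the pod reports Running, the port Kubernetes picked can be read back; a short sketch, assuming the functional-378114 context is still current (the later ServiceCmd subtests resolve the same port to 32032):

kubectl --context functional-378114 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'
out/minikube-linux-amd64 -p functional-378114 service hello-node --url    # prints a reachable http URL for the NodePort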

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image load --daemon gcr.io/google-containers/addon-resizer:functional-378114
functional_test.go:353: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 image load --daemon gcr.io/google-containers/addon-resizer:functional-378114: (3.690255185s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.98s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-378114 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-378114 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ac7a8784-f1c9-4dff-8cfb-ee861fb3a06e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ac7a8784-f1c9-4dff-8cfb-ee861fb3a06e] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.027793048s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.35s)
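
testdata/testsvc.yaml itself is not reproduced in this log; from the pod name, the run=nginx-svc label, and the LoadBalancer ingress IP read back in the IngressIP subtest below, it is roughly a pod plus a LoadBalancer Service. A hedged guess at an equivalent manifest (container image and port are assumptions):

kubectl --context functional-378114 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  containers:
  - name: nginx
    image: nginx:alpine        # assumption: any nginx image serving port 80
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer           # the tunnel assigns this Service its ingress IP
  selector:
    run: nginx-svc
  ports:
  - port: 80
    targetPort: 80
EOF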

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image load --daemon gcr.io/google-containers/addon-resizer:functional-378114
functional_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 image load --daemon gcr.io/google-containers/addon-resizer:functional-378114: (2.388422605s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.154935394s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-378114
functional_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image load --daemon gcr.io/google-containers/addon-resizer:functional-378114
functional_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 image load --daemon gcr.io/google-containers/addon-resizer:functional-378114: (3.998239785s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image ls
E0323 23:02:52.829928   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 service list -o json
functional_test.go:1492: Took "689.063975ms" to run "out/minikube-linux-amd64 -p functional-378114 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.69s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 service --namespace=default --https --url hello-node
E0323 23:02:51.550429   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
E0323 23:02:51.556289   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
E0323 23:02:51.566723   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
E0323 23:02:51.587062   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
E0323 23:02:51.627331   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
E0323 23:02:51.707739   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
E0323 23:02:51.868780   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
E0323 23:02:52.189236   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
functional_test.go:1520: found endpoint: https://192.168.49.2:32032
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.80s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.49.2:32032
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image save gcr.io/google-containers/addon-resizer:functional-378114 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:378: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 image save gcr.io/google-containers/addon-resizer:functional-378114 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.687525175s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.69s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-378114 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.110.175.53 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-378114 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image rm gcr.io/google-containers/addon-resizer:functional-378114
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.014342894s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image ls
E0323 23:02:56.671099   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.68s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1313: Took "527.082153ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "46.358659ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-378114
functional_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 image save --daemon gcr.io/google-containers/addon-resizer:functional-378114
functional_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p functional-378114 image save --daemon gcr.io/google-containers/addon-resizer:functional-378114: (2.532230947s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-378114
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.68s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1364: Took "529.719422ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "79.316622ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-378114 /tmp/TestFunctionalparallelMountCmdany-port3799409327/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1679612577567692624" to /tmp/TestFunctionalparallelMountCmdany-port3799409327/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1679612577567692624" to /tmp/TestFunctionalparallelMountCmdany-port3799409327/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1679612577567692624" to /tmp/TestFunctionalparallelMountCmdany-port3799409327/001/test-1679612577567692624
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-378114 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (547.153394ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 23 23:02 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 23 23:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 23 23:02 test-1679612577567692624
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh cat /mount-9p/test-1679612577567692624
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-378114 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b998e390-dc21-4f88-a1d0-334393819df5] Pending
helpers_test.go:344: "busybox-mount" [b998e390-dc21-4f88-a1d0-334393819df5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b998e390-dc21-4f88-a1d0-334393819df5] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b998e390-dc21-4f88-a1d0-334393819df5] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.010432145s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-378114 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-378114 /tmp/TestFunctionalparallelMountCmdany-port3799409327/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.71s)
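
The sequence above is a complete 9p mount round trip: mount a host directory into the guest, confirm it with findmnt, exercise it from a pod, then force-unmount. The same check can be run by hand; a sketch using a hypothetical host directory in place of the test's TempDir:

out/minikube-linux-amd64 mount -p functional-378114 /tmp/hostdir:/mount-9p &    # /tmp/hostdir is hypothetical
out/minikube-linux-amd64 -p functional-378114 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-378114 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-378114 ssh "sudo umount -f /mount-9p"    # then kill the backgrounded mount process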

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (3.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-378114 /tmp/TestFunctionalparallelMountCmdspecific-port3252035808/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-378114 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (637.103421ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-378114 /tmp/TestFunctionalparallelMountCmdspecific-port3252035808/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-378114 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-378114 ssh "sudo umount -f /mount-9p": exit status 1 (577.608582ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-378114 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-378114 /tmp/TestFunctionalparallelMountCmdspecific-port3252035808/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.20s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.18s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-378114
--- PASS: TestFunctional/delete_addon-resizer_images (0.18s)

                                                
                                    
TestFunctional/delete_my-image_image (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-378114
--- PASS: TestFunctional/delete_my-image_image (0.07s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-378114
--- PASS: TestFunctional/delete_minikube_cached_images (0.08s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.06s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-535992
image_test.go:73: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-535992: (2.063283109s)
--- PASS: TestImageBuild/serial/NormalBuild (2.06s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.14s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-535992
image_test.go:94: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-535992: (1.136256094s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.14s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-535992
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.39s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-535992
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.39s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (55.52s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-644273 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0323 23:04:13.474642   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-644273 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (55.520057479s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (55.52s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.31s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-644273 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-644273 addons enable ingress --alsologtostderr -v=5: (11.306687719s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.31s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-644273 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.44s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (38.09s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-644273 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-644273 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.217133039s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-644273 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-644273 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0a099754-9929-4966-8994-8e701e10428d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0323 23:05:35.395188   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
helpers_test.go:344: "nginx" [0a099754-9929-4966-8994-8e701e10428d] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.006491071s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-644273 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-644273 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-644273 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-644273 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-644273 addons disable ingress-dns --alsologtostderr -v=1: (4.084034128s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-644273 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-644273 addons disable ingress --alsologtostderr -v=1: (7.322901867s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.09s)
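
testdata/nginx-ingress-v1beta1.yaml is likewise not shown in this log; given the `Host: nginx.example.com` header in the curl above and the Kubernetes v1.18.20 cluster, it is an Ingress on the legacy networking.k8s.io/v1beta1 API. A hedged sketch of an equivalent resource (backend service name and port are assumptions):

kubectl --context ingress-addon-legacy-644273 apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.example.com      # the host the curl above sends
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx     # assumption: service created by nginx-pod-svc.yaml
          servicePort: 80
EOF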

                                                
                                    
TestJSONOutput/start/Command (41.32s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-759575 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-759575 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (41.317669377s)
--- PASS: TestJSONOutput/start/Command (41.32s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-759575 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-759575 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.94s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-759575 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-759575 --output=json --user=testUser: (5.942473325s)
--- PASS: TestJSONOutput/stop/Command (5.94s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.46s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-410129 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-410129 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.370522ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ce5e93cc-3705-4cc4-8f24-d25698c928c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-410129] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9790022-7ad1-4861-8d95-75aeff31857f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16143"}}
	{"specversion":"1.0","id":"7063f962-8d1b-4427-9e4b-3411ec078c5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f7e12397-da70-4fdb-9681-4d84878a9f15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig"}}
	{"specversion":"1.0","id":"50e2db6b-7e41-4171-be1b-aecd3fd1cdaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube"}}
	{"specversion":"1.0","id":"38429374-d655-434a-aa1e-07c3c08373f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b211a0bc-a40f-42fd-99c0-d9e1efc80207","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"875b21f0-ffee-4304-96c2-0288cb52ef78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-410129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-410129
--- PASS: TestErrorJSONOutput (0.46s)
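
Every stdout line above is a CloudEvents-style JSON object, so the error record can be picked out mechanically. A sketch assuming jq is installed on the host (jq is not part of this test):

out/minikube-linux-amd64 start -p json-output-error-410129 --memory=2200 --output=json --wait=true --driver=fail \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
# expected, per the run above:  DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64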

                                                
                                    
TestKicCustomNetwork/create_custom_network (26.84s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-041081 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-041081 --network=: (24.045745954s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-041081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-041081
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-041081: (2.726148516s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.84s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (27.18s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-514023 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-514023 --network=bridge: (24.609143029s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-514023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-514023
E0323 23:07:40.144175   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
E0323 23:07:40.149492   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
E0323 23:07:40.159754   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
E0323 23:07:40.180083   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
E0323 23:07:40.220368   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
E0323 23:07:40.300705   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
E0323 23:07:40.461185   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
E0323 23:07:40.781788   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
E0323 23:07:41.422633   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-514023: (2.498746965s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.18s)

                                                
                                    
TestKicExistingNetwork (27.53s)

                                                
                                                
=== RUN   TestKicExistingNetwork
E0323 23:07:42.703520   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-825177 --network=existing-network
E0323 23:07:45.264715   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
E0323 23:07:50.385587   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
E0323 23:07:51.550716   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
E0323 23:08:00.626116   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-825177 --network=existing-network: (24.567375421s)
helpers_test.go:175: Cleaning up "existing-network-825177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-825177
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-825177: (2.51860448s)
--- PASS: TestKicExistingNetwork (27.53s)
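
This test points minikube at a docker network that already exists rather than letting it create one; the network creation itself happens in test setup and is not logged. A hedged manual equivalent (the subnet is an arbitrary choice):

docker network create --subnet=192.168.77.0/24 existing-network    # pre-create the network; subnet is arbitrary
out/minikube-linux-amd64 start -p existing-network-825177 --network=existing-network
docker network ls --format "{{.Name}}"    # existing-network should still be listed, with no extra per-profile network added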

                                                
                                    
TestKicCustomSubnet (27.07s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-623438 --subnet=192.168.60.0/24
E0323 23:08:19.235443   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
E0323 23:08:21.106513   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-623438 --subnet=192.168.60.0/24: (24.334162248s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-623438 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-623438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-623438
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-623438: (2.671272804s)
--- PASS: TestKicCustomSubnet (27.07s)

                                                
                                    
TestKicStaticIP (27.1s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-571084 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-571084 --static-ip=192.168.200.200: (24.131561064s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-571084 ip
helpers_test.go:175: Cleaning up "static-ip-571084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-571084
E0323 23:09:02.067386   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-571084: (2.726215082s)
--- PASS: TestKicStaticIP (27.10s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (56.99s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-419139 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-419139 --driver=docker  --container-runtime=docker: (25.385365216s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-422909 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-422909 --driver=docker  --container-runtime=docker: (24.274183975s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-419139
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-422909
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-422909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-422909
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-422909: (2.780093697s)
helpers_test.go:175: Cleaning up "first-419139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-419139
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-419139: (2.799875357s)
--- PASS: TestMinikubeProfile (56.99s)
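
The profile test simply creates two clusters and switches between them; a sketch of the same steps, assuming placeholder profile names first and second:

	minikube start -p first --driver=docker --container-runtime=docker
	minikube start -p second --driver=docker --container-runtime=docker
	# select "first" as the active profile, then dump all profiles as JSON
	minikube profile first
	minikube profile list -ojson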

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.34s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-677755 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-677755 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.336629194s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.34s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.46s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-677755 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.46s)
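
The two steps above (start with a mount, then list the mount point) can be reproduced as follows; demo-mount is a placeholder profile name and the guest runs without Kubernetes:

	# expose the default host mount at /minikube-host inside the guest
	minikube start -p demo-mount --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
	  --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=docker
	# verification is just a directory listing over ssh
	minikube -p demo-mount ssh -- ls /minikube-host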

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-702103 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0323 23:10:15.732362   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
E0323 23:10:15.737634   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
E0323 23:10:15.747877   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
E0323 23:10:15.768191   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
E0323 23:10:15.808521   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
E0323 23:10:15.888935   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
E0323 23:10:16.049346   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
E0323 23:10:16.370013   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-702103 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.883530095s)
E0323 23:10:17.010197   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountSecond (7.88s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.47s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-702103 ssh -- ls /minikube-host
E0323 23:10:18.291219   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountSecond (0.47s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.16s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-677755 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-677755 --alsologtostderr -v=5: (2.162516435s)
--- PASS: TestMountStart/serial/DeleteFirst (2.16s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.46s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-702103 ssh -- ls /minikube-host
E0323 23:10:20.852189   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.46s)

                                                
                                    
TestMountStart/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-702103
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-702103: (1.396681554s)
--- PASS: TestMountStart/serial/Stop (1.40s)

                                                
                                    
TestMountStart/serial/RestartStopped (9.35s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-702103
E0323 23:10:23.987823   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
E0323 23:10:25.973183   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-702103: (8.345902435s)
--- PASS: TestMountStart/serial/RestartStopped (9.35s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.46s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-702103 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.46s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (71.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-716374 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0323 23:10:36.213544   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
E0323 23:10:56.694666   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
E0323 23:11:37.655482   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-716374 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m10.350207578s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.28s)
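
A hand-run sketch of the two-node bring-up, with demo-multinode as a placeholder profile name:

	# create a control plane plus one worker and wait for all components
	minikube start -p demo-multinode --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=docker
	# status should list one Control Plane entry and one Worker entry
	minikube -p demo-multinode status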

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (43.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-716374 -- rollout status deployment/busybox: (2.750502691s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- exec busybox-6b86dd6d48-6v7c8 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- exec busybox-6b86dd6d48-jqvdj -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- exec busybox-6b86dd6d48-6v7c8 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- exec busybox-6b86dd6d48-jqvdj -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- exec busybox-6b86dd6d48-6v7c8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- exec busybox-6b86dd6d48-jqvdj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (43.03s)
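
The repeated "expected 2 Pod IPs but got 1" lines are the test's retry loop waiting for the second busybox replica to land on the worker node. A rough shell equivalent, run from the minikube source tree (which provides the testdata manifest) against the placeholder profile demo-multinode:

	# deploy the two-replica busybox workload and wait for the rollout
	minikube kubectl -p demo-multinode -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
	minikube kubectl -p demo-multinode -- rollout status deployment/busybox
	# poll until both pods have been assigned a pod IP
	until [ "$(minikube kubectl -p demo-multinode -- get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -eq 2 ]; do
	  sleep 2
	done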

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- exec busybox-6b86dd6d48-6v7c8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- exec busybox-6b86dd6d48-6v7c8 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- exec busybox-6b86dd6d48-jqvdj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716374 -- exec busybox-6b86dd6d48-jqvdj -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
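
The host-ping check resolves host.minikube.internal inside a pod and pings the resulting gateway address. A sketch targeting the deployment rather than a specific pod name (192.168.58.1 is simply the gateway observed in this run):

	minikube kubectl -p demo-multinode -- exec deploy/busybox -- sh -c \
	  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	minikube kubectl -p demo-multinode -- exec deploy/busybox -- sh -c "ping -c 1 192.168.58.1"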

                                                
                                    
TestMultiNode/serial/AddNode (18.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-716374 -v 3 --alsologtostderr
E0323 23:12:40.143993   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-716374 -v 3 --alsologtostderr: (17.201398332s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 status --alsologtostderr
multinode_test.go:116: (dbg) Done: out/minikube-linux-amd64 -p multinode-716374 status --alsologtostderr: (1.23760859s)
--- PASS: TestMultiNode/serial/AddNode (18.44s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.51s)

                                                
                                    
TestMultiNode/serial/CopyFile (16.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Done: out/minikube-linux-amd64 -p multinode-716374 status --output json --alsologtostderr: (1.114532172s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 cp testdata/cp-test.txt multinode-716374:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 cp multinode-716374:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile924466575/001/cp-test_multinode-716374.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374 "sudo cat /home/docker/cp-test.txt"
E0323 23:12:51.550277   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 cp multinode-716374:/home/docker/cp-test.txt multinode-716374-m02:/home/docker/cp-test_multinode-716374_multinode-716374-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374-m02 "sudo cat /home/docker/cp-test_multinode-716374_multinode-716374-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 cp multinode-716374:/home/docker/cp-test.txt multinode-716374-m03:/home/docker/cp-test_multinode-716374_multinode-716374-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374-m03 "sudo cat /home/docker/cp-test_multinode-716374_multinode-716374-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 cp testdata/cp-test.txt multinode-716374-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 cp multinode-716374-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile924466575/001/cp-test_multinode-716374-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 cp multinode-716374-m02:/home/docker/cp-test.txt multinode-716374:/home/docker/cp-test_multinode-716374-m02_multinode-716374.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374 "sudo cat /home/docker/cp-test_multinode-716374-m02_multinode-716374.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 cp multinode-716374-m02:/home/docker/cp-test.txt multinode-716374-m03:/home/docker/cp-test_multinode-716374-m02_multinode-716374-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374-m02 "sudo cat /home/docker/cp-test.txt"
E0323 23:12:59.575879   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374-m03 "sudo cat /home/docker/cp-test_multinode-716374-m02_multinode-716374-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 cp testdata/cp-test.txt multinode-716374-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 cp multinode-716374-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile924466575/001/cp-test_multinode-716374-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 cp multinode-716374-m03:/home/docker/cp-test.txt multinode-716374:/home/docker/cp-test_multinode-716374-m03_multinode-716374.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374 "sudo cat /home/docker/cp-test_multinode-716374-m03_multinode-716374.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 cp multinode-716374-m03:/home/docker/cp-test.txt multinode-716374-m02:/home/docker/cp-test_multinode-716374-m03_multinode-716374-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 ssh -n multinode-716374-m02 "sudo cat /home/docker/cp-test_multinode-716374-m03_multinode-716374-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (16.87s)
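
The copy test exercises minikube cp in every direction; a condensed sketch against the placeholder profile demo-multinode (nodes follow the <profile>, <profile>-m02, ... naming seen above):

	# host to primary node, then read it back over ssh
	minikube -p demo-multinode cp testdata/cp-test.txt demo-multinode:/home/docker/cp-test.txt
	minikube -p demo-multinode ssh -n demo-multinode "sudo cat /home/docker/cp-test.txt"
	# node-to-node copies name the source and destination nodes explicitly
	minikube -p demo-multinode cp demo-multinode:/home/docker/cp-test.txt demo-multinode-m02:/home/docker/cp-test.txt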

                                                
                                    
TestMultiNode/serial/StopNode (3.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-716374 node stop m03: (1.418963259s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 status
E0323 23:13:07.828693   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-716374 status: exit status 7 (874.741547ms)

                                                
                                                
-- stdout --
	multinode-716374
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-716374-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-716374-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-716374 status --alsologtostderr: exit status 7 (889.857865ms)

                                                
                                                
-- stdout --
	multinode-716374
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-716374-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-716374-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0323 23:13:07.993952  240756 out.go:296] Setting OutFile to fd 1 ...
	I0323 23:13:07.994114  240756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:13:07.994124  240756 out.go:309] Setting ErrFile to fd 2...
	I0323 23:13:07.994128  240756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:13:07.994248  240756 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
	I0323 23:13:07.994443  240756 out.go:303] Setting JSON to false
	I0323 23:13:07.994471  240756 mustload.go:65] Loading cluster: multinode-716374
	I0323 23:13:07.994627  240756 notify.go:220] Checking for updates...
	I0323 23:13:07.994840  240756 config.go:182] Loaded profile config "multinode-716374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:13:07.994856  240756 status.go:255] checking status of multinode-716374 ...
	I0323 23:13:07.995256  240756 cli_runner.go:164] Run: docker container inspect multinode-716374 --format={{.State.Status}}
	I0323 23:13:08.066370  240756 status.go:330] multinode-716374 host status = "Running" (err=<nil>)
	I0323 23:13:08.066408  240756 host.go:66] Checking if "multinode-716374" exists ...
	I0323 23:13:08.066673  240756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-716374
	I0323 23:13:08.138220  240756 host.go:66] Checking if "multinode-716374" exists ...
	I0323 23:13:08.138485  240756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0323 23:13:08.138525  240756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-716374
	I0323 23:13:08.206639  240756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/multinode-716374/id_rsa Username:docker}
	I0323 23:13:08.290533  240756 ssh_runner.go:195] Run: systemctl --version
	I0323 23:13:08.294791  240756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0323 23:13:08.305209  240756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0323 23:13:08.439494  240756 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:42 SystemTime:2023-03-23 23:13:08.429159428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0323 23:13:08.440019  240756 kubeconfig.go:92] found "multinode-716374" server: "https://192.168.58.2:8443"
	I0323 23:13:08.440039  240756 api_server.go:165] Checking apiserver status ...
	I0323 23:13:08.440071  240756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0323 23:13:08.449584  240756 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2089/cgroup
	I0323 23:13:08.456886  240756 api_server.go:181] apiserver freezer: "7:freezer:/docker/9c2343c0c475d0c855695be5de929fb353159f73ce696cd0829dfb0c8b844163/kubepods/burstable/podb1d4124a33090ebcf79dd739d9fdaad9/752ef47cbad386403ef12e0ce0f1a295a32be6f2e29b5c0e1821fbd01e859d6b"
	I0323 23:13:08.456954  240756 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9c2343c0c475d0c855695be5de929fb353159f73ce696cd0829dfb0c8b844163/kubepods/burstable/podb1d4124a33090ebcf79dd739d9fdaad9/752ef47cbad386403ef12e0ce0f1a295a32be6f2e29b5c0e1821fbd01e859d6b/freezer.state
	I0323 23:13:08.463662  240756 api_server.go:203] freezer state: "THAWED"
	I0323 23:13:08.463689  240756 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0323 23:13:08.469202  240756 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0323 23:13:08.469235  240756 status.go:421] multinode-716374 apiserver status = Running (err=<nil>)
	I0323 23:13:08.469246  240756 status.go:257] multinode-716374 status: &{Name:multinode-716374 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0323 23:13:08.469269  240756 status.go:255] checking status of multinode-716374-m02 ...
	I0323 23:13:08.469573  240756 cli_runner.go:164] Run: docker container inspect multinode-716374-m02 --format={{.State.Status}}
	I0323 23:13:08.538368  240756 status.go:330] multinode-716374-m02 host status = "Running" (err=<nil>)
	I0323 23:13:08.538393  240756 host.go:66] Checking if "multinode-716374-m02" exists ...
	I0323 23:13:08.538677  240756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-716374-m02
	I0323 23:13:08.609338  240756 host.go:66] Checking if "multinode-716374-m02" exists ...
	I0323 23:13:08.609663  240756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0323 23:13:08.609709  240756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-716374-m02
	I0323 23:13:08.676783  240756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/multinode-716374-m02/id_rsa Username:docker}
	I0323 23:13:08.761973  240756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0323 23:13:08.770970  240756 status.go:257] multinode-716374-m02 status: &{Name:multinode-716374-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0323 23:13:08.771006  240756 status.go:255] checking status of multinode-716374-m03 ...
	I0323 23:13:08.771287  240756 cli_runner.go:164] Run: docker container inspect multinode-716374-m03 --format={{.State.Status}}
	I0323 23:13:08.840819  240756 status.go:330] multinode-716374-m03 host status = "Stopped" (err=<nil>)
	I0323 23:13:08.840850  240756 status.go:343] host is not running, skipping remaining checks
	I0323 23:13:08.840863  240756 status.go:257] multinode-716374-m03 status: &{Name:multinode-716374-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.18s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-716374 node start m03 --alsologtostderr: (11.863738124s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 status
multinode_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p multinode-716374 status: (1.110020278s)
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.11s)
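
StopNode and StartAfterStop together amount to the following sequence (m03 is the node added earlier; while it is down, minikube status exits with code 7):

	minikube -p demo-multinode node stop m03
	minikube -p demo-multinode status || echo "status exited with $? (7 is expected while a node is stopped)"
	minikube -p demo-multinode node start m03
	kubectl get nodes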

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (95.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-716374
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-716374
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-716374: (22.981397781s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-716374 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-716374 --wait=true -v=8 --alsologtostderr: (1m12.74200872s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-716374
--- PASS: TestMultiNode/serial/RestartKeepsNodes (95.81s)

                                                
                                    
TestMultiNode/serial/DeleteNode (6.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-716374 node delete m03: (5.226987971s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.21s)
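
The Ready-condition assertion after deleting a node boils down to a go-template query; a sketch assuming kubectl is pointed at the cluster's context:

	minikube -p demo-multinode node delete m03
	# every remaining node should print True for its Ready condition
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'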

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 stop
E0323 23:15:15.730565   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-716374 stop: (21.582184911s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-716374 status: exit status 7 (185.454422ms)

                                                
                                                
-- stdout --
	multinode-716374
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-716374-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-716374 status --alsologtostderr: exit status 7 (176.363283ms)

                                                
                                                
-- stdout --
	multinode-716374
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-716374-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0323 23:15:25.782142  262950 out.go:296] Setting OutFile to fd 1 ...
	I0323 23:15:25.782262  262950 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:15:25.782270  262950 out.go:309] Setting ErrFile to fd 2...
	I0323 23:15:25.782275  262950 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0323 23:15:25.782395  262950 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
	I0323 23:15:25.782560  262950 out.go:303] Setting JSON to false
	I0323 23:15:25.782585  262950 mustload.go:65] Loading cluster: multinode-716374
	I0323 23:15:25.782683  262950 notify.go:220] Checking for updates...
	I0323 23:15:25.783012  262950 config.go:182] Loaded profile config "multinode-716374": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0323 23:15:25.783029  262950 status.go:255] checking status of multinode-716374 ...
	I0323 23:15:25.783507  262950 cli_runner.go:164] Run: docker container inspect multinode-716374 --format={{.State.Status}}
	I0323 23:15:25.852017  262950 status.go:330] multinode-716374 host status = "Stopped" (err=<nil>)
	I0323 23:15:25.852051  262950 status.go:343] host is not running, skipping remaining checks
	I0323 23:15:25.852058  262950 status.go:257] multinode-716374 status: &{Name:multinode-716374 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0323 23:15:25.852086  262950 status.go:255] checking status of multinode-716374-m02 ...
	I0323 23:15:25.852360  262950 cli_runner.go:164] Run: docker container inspect multinode-716374-m02 --format={{.State.Status}}
	I0323 23:15:25.917062  262950 status.go:330] multinode-716374-m02 host status = "Stopped" (err=<nil>)
	I0323 23:15:25.917093  262950 status.go:343] host is not running, skipping remaining checks
	I0323 23:15:25.917099  262950 status.go:257] multinode-716374-m02 status: &{Name:multinode-716374-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.94s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (77.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-716374 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0323 23:15:43.416613   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-716374 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m16.970305318s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716374 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (77.97s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-716374
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-716374-m02 --driver=docker  --container-runtime=docker
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-716374-m02 --driver=docker  --container-runtime=docker: exit status 14 (61.665244ms)

                                                
                                                
-- stdout --
	* [multinode-716374-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-716374-m02' is duplicated with machine name 'multinode-716374-m02' in profile 'multinode-716374'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-716374-m03 --driver=docker  --container-runtime=docker
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-716374-m03 --driver=docker  --container-runtime=docker: (24.528226628s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-716374
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-716374: exit status 80 (424.086387ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-716374
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-716374-m03 already exists in multinode-716374-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-716374-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-716374-m03: (2.631527921s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.69s)

                                                
                                    
TestPreload (114.44s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-592054 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0323 23:17:40.145115   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
E0323 23:17:51.550825   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-592054 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (51.45213364s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-592054 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-592054 -- docker pull gcr.io/k8s-minikube/busybox: (1.383324523s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-592054
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-592054: (10.91783314s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-592054 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-592054 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (47.441800991s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-592054 -- docker images
helpers_test.go:175: Cleaning up "test-preload-592054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-592054
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-592054: (2.780893706s)
--- PASS: TestPreload (114.44s)
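
The preload test verifies that an image pulled into a non-preloaded cluster survives a stop/start cycle. A hand-run sketch with demo-preload as a placeholder profile name:

	# create the cluster without the preload tarball, on an older Kubernetes version
	minikube start -p demo-preload --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=docker
	# add an image that is not part of any preload
	minikube ssh -p demo-preload -- docker pull gcr.io/k8s-minikube/busybox
	minikube stop -p demo-preload
	# restart on the default Kubernetes version and confirm the image is still present
	minikube start -p demo-preload --memory=2200 --driver=docker --container-runtime=docker
	minikube ssh -p demo-preload -- docker images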

                                                
                                    
TestScheduledStopUnix (97.99s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-535029 --memory=2048 --driver=docker  --container-runtime=docker
E0323 23:19:14.597377   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-535029 --memory=2048 --driver=docker  --container-runtime=docker: (23.789680203s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-535029 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-535029 -n scheduled-stop-535029
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-535029 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-535029 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-535029 -n scheduled-stop-535029
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-535029
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-535029 --schedule 15s
E0323 23:20:15.732380   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-535029
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-535029: exit status 7 (113.366347ms)

                                                
                                                
-- stdout --
	scheduled-stop-535029
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-535029 -n scheduled-stop-535029
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-535029 -n scheduled-stop-535029: exit status 7 (112.742939ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-535029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-535029
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-535029: (2.248504948s)
--- PASS: TestScheduledStopUnix (97.99s)
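
The scheduled-stop flow, condensed, with demo-sched as a placeholder profile name:

	# schedule a stop five minutes out and confirm a stop timer is registered
	minikube stop -p demo-sched --schedule 5m
	minikube status --format='{{.TimeToStop}}' -p demo-sched
	# cancel it, then arm a short schedule and let it fire
	minikube stop -p demo-sched --cancel-scheduled
	minikube stop -p demo-sched --schedule 15s
	sleep 20
	minikube status --format='{{.Host}}' -p demo-sched   # prints Stopped once the schedule has fired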

                                                
                                    
TestSkaffold (57.06s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1967724733 version
skaffold_test.go:63: skaffold version: v2.2.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-701809 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-701809 --memory=2600 --driver=docker  --container-runtime=docker: (23.478317006s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1967724733 run --minikube-profile skaffold-701809 --kube-context skaffold-701809 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1967724733 run --minikube-profile skaffold-701809 --kube-context skaffold-701809 --status-check=true --port-forward=false --interactive=false: (19.95306051s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5f675b49c5-b7ww9" [05dec1de-a835-4611-b8d7-3ec6a1cd1962] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011940142s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-58447df998-cqtn4" [90dc0bdd-bf8d-49eb-947f-b94ba499a99a] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006022389s
helpers_test.go:175: Cleaning up "skaffold-701809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-701809
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-701809: (2.960568328s)
--- PASS: TestSkaffold (57.06s)
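
The skaffold test deploys the leeroy sample app against an existing profile; a sketch assuming skaffold is on PATH and demo-skaffold is a placeholder profile name:

	minikube start -p demo-skaffold --memory=2600 --driver=docker --container-runtime=docker
	# point skaffold at the minikube profile and its kube context, with no port-forwarding or prompts
	skaffold run --minikube-profile demo-skaffold --kube-context demo-skaffold \
	  --status-check=true --port-forward=false --interactive=false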

                                                
                                    
TestInsufficientStorage (13.11s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-439060 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-439060 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.884026332s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a25e0515-5376-4384-b17d-68086564cfb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-439060] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e5a4248-f8c0-4d60-bed3-f4260faf84d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16143"}}
	{"specversion":"1.0","id":"02dba7f1-6995-4837-8899-aa50a7cbc643","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e66a0664-ed46-4394-a67a-93feb12184f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig"}}
	{"specversion":"1.0","id":"ef34e15d-c8df-4392-a52b-ec2bdd8bc6c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube"}}
	{"specversion":"1.0","id":"6dd773bc-bcd5-448e-9570-c536a5aea05f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e73fa8ed-a9ea-46dd-8d71-d1ef4c290ea9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f0103d3b-1b7a-4367-92bc-1388c52e9277","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3f10b5ef-70f2-4540-a22b-205e249c6ad4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"048d81d9-92d8-4904-9f9e-8b7700d82573","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f309a521-8ec3-4077-9538-894245d08fa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"306a3a36-788c-4b98-a2bd-33a93b2cada8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-439060 in cluster insufficient-storage-439060","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d0c9a726-06ee-457c-b834-527650d44d60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"507a7dcf-1128-4f1c-b73a-813a5cef4071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7addc594-79d5-4a6f-8df6-7e36ca918274","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-439060 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-439060 --output=json --layout=cluster: exit status 7 (467.831901ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-439060","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-439060","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0323 23:21:56.440717  311617 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-439060" does not appear in /home/jenkins/minikube-integration/16143-62012/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-439060 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-439060 --output=json --layout=cluster: exit status 7 (452.192728ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-439060","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-439060","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0323 23:21:56.892993  311813 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-439060" does not appear in /home/jenkins/minikube-integration/16143-62012/kubeconfig
	E0323 23:21:56.901548  311813 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/insufficient-storage-439060/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-439060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-439060
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-439060: (2.302686513s)
--- PASS: TestInsufficientStorage (13.11s)
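
This test simulates a nearly full /var by setting the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values visible in the JSON output above, then asserts that start aborts with exit status 26 (RSRC_DOCKER_STORAGE). A rough manual reproduction, assuming those test-only variables are plain environment variables (they are internal knobs, not a documented interface) and using an illustrative profile name:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      out/minikube-linux-amd64 start -p insufficient-storage-demo --memory=2048 \
      --output=json --wait=true --driver=docker --container-runtime=docker
    echo $?    # expected: 26 (RSRC_DOCKER_STORAGE)
    # The error's own advice for reclaiming space:
    docker system prune                    # optionally with -a
    minikube ssh -- docker system prune    # when using the Docker container runtime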

                                                
                                    
TestRunningBinaryUpgrade (70.59s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.1003879020.exe start -p running-upgrade-678999 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.1003879020.exe start -p running-upgrade-678999 --memory=2200 --vm-driver=docker  --container-runtime=docker: (34.34881659s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-678999 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0323 23:25:15.731067   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-678999 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.474500068s)
helpers_test.go:175: Cleaning up "running-upgrade-678999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-678999
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-678999: (2.320743033s)
--- PASS: TestRunningBinaryUpgrade (70.59s)
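
The upgrade-in-place flow above boils down to: create a cluster with an old released binary (v1.9.0 here, downloaded to a temp file), then point the freshly built binary at the same profile while the cluster is still running. A minimal sketch, with an illustrative path for the old binary and an illustrative profile name:

    /path/to/minikube-v1.9.0 start -p running-upgrade-demo --memory=2200 \
      --vm-driver=docker --container-runtime=docker    # the v1.9.0 binary still used --vm-driver
    out/minikube-linux-amd64 start -p running-upgrade-demo --memory=2200 \
      --alsologtostderr -v=1 --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 delete -p running-upgrade-demo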

                                                
                                    
TestKubernetesUpgrade (342.44s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-120624 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-120624 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.576405892s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-120624
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-120624: (1.545670374s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-120624 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-120624 status --format={{.Host}}: exit status 7 (150.348104ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-120624 --memory=2200 --kubernetes-version=v1.27.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-120624 --memory=2200 --kubernetes-version=v1.27.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m32.695161783s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-120624 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-120624 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-120624 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (68.586866ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-120624] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.0-beta.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-120624
	    minikube start -p kubernetes-upgrade-120624 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1206242 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-120624 --kubernetes-version=v1.27.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-120624 --memory=2200 --kubernetes-version=v1.27.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-120624 --memory=2200 --kubernetes-version=v1.27.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (20.190069372s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-120624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-120624
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-120624: (3.154551753s)
--- PASS: TestKubernetesUpgrade (342.44s)
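
The sequence exercised here is upgrade, refuse downgrade, restart: start on v1.16.0, stop, start again on v1.27.0-beta.0, confirm that asking for v1.16.0 on the upgraded profile exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED), and verify a final start on the new version still works. A condensed sketch with an illustrative profile name:

    out/minikube-linux-amd64 start -p k8s-upgrade-demo --memory=2200 \
      --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 stop -p k8s-upgrade-demo
    out/minikube-linux-amd64 start -p k8s-upgrade-demo --memory=2200 \
      --kubernetes-version=v1.27.0-beta.0 --driver=docker --container-runtime=docker
    # Downgrading the same profile is refused (exit status 106); delete and
    # recreate, or use a second profile, as the suggestion block above shows.
    out/minikube-linux-amd64 start -p k8s-upgrade-demo --memory=2200 \
      --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker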

                                                
                                    
TestMissingContainerUpgrade (137.51s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
E0323 23:22:40.143657   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Run:  /tmp/minikube-v1.9.1.1410261960.exe start -p missing-upgrade-105225 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:317: (dbg) Done: /tmp/minikube-v1.9.1.1410261960.exe start -p missing-upgrade-105225 --memory=2200 --driver=docker  --container-runtime=docker: (1m15.738484481s)
version_upgrade_test.go:326: (dbg) Run:  docker stop missing-upgrade-105225
version_upgrade_test.go:326: (dbg) Done: docker stop missing-upgrade-105225: (10.709193117s)
version_upgrade_test.go:331: (dbg) Run:  docker rm missing-upgrade-105225
version_upgrade_test.go:337: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-105225 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:337: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-105225 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (47.923435536s)
helpers_test.go:175: Cleaning up "missing-upgrade-105225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-105225
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-105225: (2.675432959s)
--- PASS: TestMissingContainerUpgrade (137.51s)
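
Here the cluster's Docker container is deliberately stopped and removed behind minikube's back after an old-binary start, and the new binary is then expected to recreate it. Sketch, with an illustrative old-binary path and profile name:

    /path/to/minikube-v1.9.1 start -p missing-upgrade-demo --memory=2200 \
      --driver=docker --container-runtime=docker
    docker stop missing-upgrade-demo && docker rm missing-upgrade-demo
    out/minikube-linux-amd64 start -p missing-upgrade-demo --memory=2200 \
      --alsologtostderr -v=1 --driver=docker --container-runtime=docker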

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-471308 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-471308 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (85.291491ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-471308] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16143
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
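
As the MK_USAGE error above states, --no-kubernetes and --kubernetes-version are mutually exclusive (exit status 14). If the version comes from the global config rather than the command line, the error's own suggestion applies: unset it, then start without Kubernetes:

    minikube config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-471308 --no-kubernetes \
      --driver=docker --container-runtime=docker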

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (38.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-471308 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-471308 --driver=docker  --container-runtime=docker: (37.583760855s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-471308 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.24s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-471308 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-471308 --no-kubernetes --driver=docker  --container-runtime=docker: (5.945986588s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-471308 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-471308 status -o json: exit status 2 (682.728944ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-471308","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-471308
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-471308: (3.635508744s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.26s)

                                                
                                    
TestNoKubernetes/serial/Start (10.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-471308 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-471308 --no-kubernetes --driver=docker  --container-runtime=docker: (10.613257116s)
--- PASS: TestNoKubernetes/serial/Start (10.61s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-471308 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-471308 "sudo systemctl is-active --quiet service kubelet": exit status 1 (610.472185ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.61s)
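
The verification is simply systemctl queried over minikube ssh: a non-zero exit (status 3 here, i.e. the kubelet unit is inactive) is the state the test wants to see. Sketch:

    out/minikube-linux-amd64 ssh -p NoKubernetes-471308 \
      "sudo systemctl is-active --quiet service kubelet"
    echo $?    # non-zero means the kubelet is not running, which is expected here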

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.093289654s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.08s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-471308
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-471308: (1.844393375s)
--- PASS: TestNoKubernetes/serial/Stop (1.84s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (9.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-471308 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-471308 --driver=docker  --container-runtime=docker: (9.558729269s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.56s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-471308 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-471308 "sudo systemctl is-active --quiet service kubelet": exit status 1 (549.054471ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.55s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.37s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (67.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.9.0.750728625.exe start -p stopped-upgrade-629152 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.9.0.750728625.exe start -p stopped-upgrade-629152 --memory=2200 --vm-driver=docker  --container-runtime=docker: (44.676020924s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.9.0.750728625.exe -p stopped-upgrade-629152 stop
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.9.0.750728625.exe -p stopped-upgrade-629152 stop: (2.547650832s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-629152 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0323 23:24:03.188955   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-629152 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (20.359431432s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (67.58s)
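
Same idea as TestRunningBinaryUpgrade, except the old-binary cluster is stopped before the new binary takes it over. Sketch, with an illustrative old-binary path and profile name:

    /path/to/minikube-v1.9.0 start -p stopped-upgrade-demo --memory=2200 \
      --vm-driver=docker --container-runtime=docker
    /path/to/minikube-v1.9.0 -p stopped-upgrade-demo stop
    out/minikube-linux-amd64 start -p stopped-upgrade-demo --memory=2200 \
      --alsologtostderr -v=1 --driver=docker --container-runtime=docker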

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-629152
version_upgrade_test.go:214: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-629152: (1.532193627s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.53s)

                                                
                                    
TestPause/serial/Start (41.3s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-574316 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-574316 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (41.297217779s)
--- PASS: TestPause/serial/Start (41.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (109.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-063647 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0323 23:26:43.350854   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-063647 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (1m49.04728016s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (109.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (50.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-775322 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-775322 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.0-beta.0: (50.560479386s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (45.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-898782 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.3
E0323 23:27:14.072395   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
E0323 23:27:40.144424   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-898782 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.3: (45.868040127s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-775322 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5172af9b-b1d7-4f35-b5de-e86fcda245c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5172af9b-b1d7-4f35-b5de-e86fcda245c2] Running
E0323 23:27:51.550662   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.013652036s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-775322 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.37s)
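
DeployApp is the same for every group: apply testdata/busybox.yaml against the profile's kubectl context, wait (up to 8m0s) for the integration-test=busybox pod to become Ready, then run a trivial exec to prove the container is reachable. A sketch against this profile's context:

    kubectl --context no-preload-775322 create -f testdata/busybox.yaml
    kubectl --context no-preload-775322 get pods -l integration-test=busybox   # poll until Running/Ready
    kubectl --context no-preload-775322 exec busybox -- /bin/sh -c "ulimit -n"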

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-898782 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c3405737-bacb-4f60-a044-421a20525235] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0323 23:27:55.032647   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c3405737-bacb-4f60-a044-421a20525235] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.013152692s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-898782 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-775322 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-775322 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-775322 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-775322 --alsologtostderr -v=3: (11.126030254s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-898782 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-898782 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-898782 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-898782 --alsologtostderr -v=3: (10.944201643s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-775322 -n no-preload-775322
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-775322 -n no-preload-775322: exit status 7 (128.652882ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-775322 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)
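
Note the exit-code convention these checks rely on: status --format={{.Host}} prints "Stopped" and returns exit status 7 while the profile is down, which the test accepts before enabling the dashboard addon on the stopped cluster. Sketch:

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-775322 -n no-preload-775322   # prints "Stopped", exit 7
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-775322 \
      --images=MetricsScraper=k8s.gcr.io/echoserver:1.4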

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (563.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-775322 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-775322 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.0-beta.0: (9m22.781871356s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-775322 -n no-preload-775322
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (563.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-898782 -n embed-certs-898782
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-898782 -n embed-certs-898782: exit status 7 (148.416391ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-898782 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (322.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-898782 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-898782 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.3: (5m21.532255946s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-898782 -n embed-certs-898782
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (322.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-063647 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9bf23d1a-86f8-49a1-bc31-91c7eb5637ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9bf23d1a-86f8-49a1-bc31-91c7eb5637ba] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.011824351s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-063647 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-063647 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-063647 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-367322 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-367322 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.3: (1m16.8627795s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-063647 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-063647 --alsologtostderr -v=3: (11.041381627s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-063647 -n old-k8s-version-063647
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-063647 -n old-k8s-version-063647: exit status 7 (128.595685ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-063647 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (62.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-063647 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0323 23:29:16.953483   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-063647 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (1m1.753765192s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-063647 -n old-k8s-version-063647
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (62.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-st4bj" [ffd7d8e3-9053-43e2-8e71-9f412a051352] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-st4bj" [ffd7d8e3-9053-43e2-8e71-9f412a051352] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.013490333s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-367322 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [55824b4f-5870-46dc-ab73-67df9b20cb06] Pending
helpers_test.go:344: "busybox" [55824b4f-5870-46dc-ab73-67df9b20cb06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [55824b4f-5870-46dc-ab73-67df9b20cb06] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.012368627s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-367322 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-st4bj" [ffd7d8e3-9053-43e2-8e71-9f412a051352] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006179959s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-063647 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-367322 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-367322 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-367322 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-367322 --alsologtostderr -v=3: (10.989629219s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-063647 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-063647 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-063647 -n old-k8s-version-063647
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-063647 -n old-k8s-version-063647: exit status 2 (511.183429ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-063647 -n old-k8s-version-063647
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-063647 -n old-k8s-version-063647: exit status 2 (499.362826ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-063647 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-063647 -n old-k8s-version-063647
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-063647 -n old-k8s-version-063647
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.54s)
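
The pause check uses the same status conventions: while paused, {{.APIServer}} reports "Paused" and {{.Kubelet}} reports "Stopped", each with exit status 2, and the follow-up status calls after unpause succeed. Sketch:

    out/minikube-linux-amd64 pause -p old-k8s-version-063647 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-063647 -n old-k8s-version-063647   # "Paused", exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-063647 -n old-k8s-version-063647     # "Stopped", exit 2
    out/minikube-linux-amd64 unpause -p old-k8s-version-063647 --alsologtostderr -v=1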

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-367322 -n default-k8s-diff-port-367322
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-367322 -n default-k8s-diff-port-367322: exit status 7 (128.816942ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-367322 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (561.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-367322 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-367322 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.3: (9m21.236754562s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-367322 -n default-k8s-diff-port-367322
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (561.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (40.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-648543 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-648543 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.0-beta.0: (40.116341431s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-648543 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-648543 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-648543 --alsologtostderr -v=3: (11.004037173s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-648543 -n newest-cni-648543
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-648543 -n newest-cni-648543: exit status 7 (117.895809ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-648543 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)
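The EnableAddonAfterStop step relies on the fact that status exits non-zero (7 here) while the node is stopped but still prints the host state; the test treats that exit code as acceptable and then enables the dashboard addon. A manual replay, using the profile name from this run, looks like:

  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-648543 -n newest-cni-648543   # prints "Stopped", exit code 7
  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-648543 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4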

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (29.38s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-648543 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.0-beta.0
E0323 23:31:33.110052   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-648543 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.27.0-beta.0: (28.863995193s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-648543 -n newest-cni-648543
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.5s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-648543 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.55s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-648543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-648543 -n newest-cni-648543
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-648543 -n newest-cni-648543: exit status 2 (529.580995ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-648543 -n newest-cni-648543
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-648543 -n newest-cni-648543: exit status 2 (503.445453ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-648543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-648543 -n newest-cni-648543
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-648543 -n newest-cni-648543
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.55s)
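The Pause step is a fixed sequence: pause the profile, probe the API server and kubelet state (both probes exit 2 while paused), then unpause. A hand-run equivalent, using the profile name from this run, is:

  out/minikube-linux-amd64 pause -p newest-cni-648543 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-648543 -n newest-cni-648543   # "Paused", exit 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-648543 -n newest-cni-648543     # "Stopped", exit 2
  out/minikube-linux-amd64 unpause -p newest-cni-648543 --alsologtostderr -v=1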

                                                
                                    
TestNetworkPlugins/group/auto/Start (46.89s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0323 23:32:00.794630   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (46.89351153s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.89s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.48s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-452361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.25s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-452361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-wq5pc" [de71c5d8-2381-409d-9f40-e469d7d7fb4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-wq5pc" [de71c5d8-2381-409d-9f40-e469d7d7fb4b] Running
E0323 23:32:40.143760   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005881288s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.25s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-452361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
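The DNS, Localhost and HairPin checks for this plugin group all reduce to exec'ing into the netcat deployment created by NetCatPod; the exact kubectl invocations from the log are repeated below and can be run manually against the auto-452361 context from this run.

  kubectl --context auto-452361 exec deployment/netcat -- nslookup kubernetes.default                     # in-cluster DNS resolution
  kubectl --context auto-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"     # pod can reach itself on localhost
  kubectl --context auto-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"        # hairpin: pod reaches its own service name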

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (55.39s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0323 23:33:29.314798   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
E0323 23:33:29.320074   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
E0323 23:33:29.330317   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
E0323 23:33:29.350565   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
E0323 23:33:29.390977   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
E0323 23:33:29.471919   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
E0323 23:33:29.632705   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
E0323 23:33:29.953335   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
E0323 23:33:30.594291   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
E0323 23:33:31.874995   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
E0323 23:33:34.436097   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (55.393361334s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-k5r6b" [6372909e-354a-49cf-9bca-3cfbf377fd90] Running
E0323 23:33:39.556995   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011988147s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-k5r6b" [6372909e-354a-49cf-9bca-3cfbf377fd90] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006523257s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-898782 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.51s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-898782 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.51s)
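VerifyKubernetesImages dumps the container runtime's image list as JSON over SSH and scans it for images that are not part of a minikube deployment (here it flags gcr.io/k8s-minikube/busybox). One way to eyeball the same list by hand, assuming jq is installed on the host, is sketched below; the jq filter is illustrative and not part of the test.

  out/minikube-linux-amd64 ssh -p embed-certs-898782 "sudo crictl images -o json" | jq -r '.images[].repoTags[]?'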

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.92s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-898782 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-898782 -n embed-certs-898782
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-898782 -n embed-certs-898782: exit status 2 (513.528237ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-898782 -n embed-certs-898782
E0323 23:33:49.798194   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-898782 -n embed-certs-898782: exit status 2 (601.453889ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-898782 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-898782 -n embed-certs-898782
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-898782 -n embed-certs-898782
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.92s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (71.02s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m11.02017567s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-w75pg" [c52e3b49-ba79-43c7-a48e-d39849b12c0c] Running
E0323 23:34:10.279003   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013539327s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
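The ControllerPod step waits for the kindnet DaemonSet pod to report Running. Outside the harness, an approximately equivalent manual check (a sketch, not the test's own code) would be:

  kubectl --context kindnet-452361 -n kube-system get pods -l app=kindnet   # expect the kindnet-* pod in Running state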

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.5s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-452361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-452361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-twvq4" [7dba9de7-4d75-433d-93c1-e52170f541ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-twvq4" [7dba9de7-4d75-433d-93c1-e52170f541ec] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006711961s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-452361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (55.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (55.120653848s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.12s)
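Note that the custom-flannel profile passes a file path to --cni (testdata/kube-flannel.yaml) rather than one of the built-in plugin names, which is how this group exercises a user-supplied CNI manifest; the start command from the log is:

  out/minikube-linux-amd64 start -p custom-flannel-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker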

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wdx4b" [969ee9ea-7e30-43c2-948f-c3f435c9271d] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.016609415s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.57s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-452361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.57s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.31s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-452361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-w7b2j" [7b95a20a-cb88-43d8-a9c2-3eb50371c605] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0323 23:35:15.730647   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/ingress-addon-legacy-644273/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-w7b2j" [7b95a20a-cb88-43d8-a9c2-3eb50371c605] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006388482s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-452361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.53s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-452361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.53s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-452361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-fjfsx" [0883fd6a-48bc-4a19-86e7-cee37642763f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-fjfsx" [0883fd6a-48bc-4a19-86e7-cee37642763f] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.007572986s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/false/Start (44.79s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p false-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0323 23:35:54.597574   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p false-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (44.789464923s)
--- PASS: TestNetworkPlugins/group/false/Start (44.79s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-452361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (48.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0323 23:36:33.110130   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (48.329581049s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (48.33s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.53s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-452361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.53s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (9.3s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-452361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-9qsmj" [4eb6647d-0347-40c6-b346-52bba82331c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-9qsmj" [4eb6647d-0347-40c6-b346-52bba82331c0] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.006964945s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-452361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (56.73s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (56.726603132s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.73s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.52s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-452361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.52s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-452361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-bmckd" [497039dc-238a-406d-8f2e-0781050008b9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-bmckd" [497039dc-238a-406d-8f2e-0781050008b9] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.006764531s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.23s)
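NetCatPod (here for the enable-default-cni profile) force-replaces a small netcat deployment and then waits up to 15 minutes for its pod to become Ready. The replace command below is taken from the log; the watch command is a hypothetical manual follow-up rather than something the test runs.

  kubectl --context enable-default-cni-452361 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context enable-default-cni-452361 get pods -l app=netcat -w   # watch until the netcat pod reports Running/Ready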

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-452361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-rwk5n" [6e085a23-422a-4ba9-8051-a64857daeca9] Running
E0323 23:37:33.997154   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
E0323 23:37:34.002443   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
E0323 23:37:34.012702   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
E0323 23:37:34.032923   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
E0323 23:37:34.073237   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
E0323 23:37:34.153543   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
E0323 23:37:34.313946   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
E0323 23:37:34.634758   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
E0323 23:37:35.275127   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016556435s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-rwk5n" [6e085a23-422a-4ba9-8051-a64857daeca9] Running
E0323 23:37:36.557332   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
E0323 23:37:39.117834   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
E0323 23:37:40.144542   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/functional-378114/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007368099s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-775322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.55s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-775322 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.55s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.97s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-775322 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-775322 -n no-preload-775322
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-775322 -n no-preload-775322: exit status 2 (542.907554ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-775322 -n no-preload-775322
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-775322 -n no-preload-775322: exit status 2 (552.942901ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-775322 --alsologtostderr -v=1
E0323 23:37:44.238728   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-775322 -n no-preload-775322
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-775322 -n no-preload-775322
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.97s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (53.66s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0323 23:37:51.550100   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/addons-213626/client.crt: no such file or directory
E0323 23:37:54.479945   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (53.661606303s)
--- PASS: TestNetworkPlugins/group/bridge/Start (53.66s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (41.54s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-452361 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (41.540756448s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (41.54s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ztrr7" [d6010d22-98ea-461c-bb0b-d431d5bf1ad5] Running
E0323 23:38:14.960686   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/auto-452361/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.016453849s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.57s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-452361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.57s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.3s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-452361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-vdqrz" [15899491-ce06-4d61-a909-eac037e9443e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-vdqrz" [15899491-ce06-4d61-a909-eac037e9443e] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00632507s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-452361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.54s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-452361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.54s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.22s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-452361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-7n5wd" [f4a36b70-4ede-40f4-8577-b8c5f7792e21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-7n5wd" [f4a36b70-4ede-40f4-8577-b8c5f7792e21] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.006918676s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.54s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-452361 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.54s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-452361 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-wwjlk" [dda8b3c6-cf2b-4fe5-bd6e-6d9d5b6a2536] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-wwjlk" [dda8b3c6-cf2b-4fe5-bd6e-6d9d5b6a2536] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.007744063s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-452361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)
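The three checks above exercise connectivity from inside the netcat pod: nslookup of kubernetes.default verifies cluster DNS, nc against localhost:8080 verifies the pod can reach its own container, and nc against the netcat service name verifies hairpin traffic (a pod reaching itself through its own service). A small sketch of those probes driven through kubectl exec, assuming the kubenet-452361 context from the log; the wrapper function is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// execInNetcat runs a command inside the netcat deployment's pod via
// `kubectl exec`, mirroring the DNS, localhost and hairpin probes above.
func execInNetcat(context string, args ...string) error {
	cmd := append([]string{"--context", context, "exec", "deployment/netcat", "--"}, args...)
	out, err := exec.Command("kubectl", cmd...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	ctx := "kubenet-452361" // profile/context name taken from the log above
	checks := map[string][]string{
		"dns":       {"nslookup", "kubernetes.default"},
		"localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"hairpin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for name, args := range checks {
		if err := execInNetcat(ctx, args...); err != nil {
			fmt.Printf("%s check failed: %v\n", name, err)
		} else {
			fmt.Printf("%s check ok\n", name)
		}
	}
}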

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-452361 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-452361 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E0323 23:39:27.740310   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/kindnet-452361/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rk845" [e1c4e3d2-acc9-458a-b674-9c34215f375a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013134923s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rk845" [e1c4e3d2-acc9-458a-b674-9c34215f375a] Running
E0323 23:39:48.220863   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/kindnet-452361/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006470906s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-367322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-367322 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)
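The VerifyKubernetesImages step lists images on the node with `sudo crictl images -o json` and reports any that are not part of the expected Kubernetes/minikube set, which is why the busybox image is called out above. A sketch of that kind of check, assuming crictl's documented JSON shape (an "images" array with "repoTags"); the registry-prefix filter is only an approximation of what the real test compares against:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages models just the fields needed from `crictl images -o json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Binary path and profile name are taken from the log above.
	out, err := exec.Command("out/minikube-linux-amd64", "ssh",
		"-p", "default-k8s-diff-port-367322", "sudo crictl images -o json").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("unexpected crictl output:", err)
		return
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "registry.k8s.io/") && !strings.HasPrefix(tag, "gcr.io/k8s-minikube/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}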

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-367322 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-367322 -n default-k8s-diff-port-367322
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-367322 -n default-k8s-diff-port-367322: exit status 2 (481.518083ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-367322 -n default-k8s-diff-port-367322
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-367322 -n default-k8s-diff-port-367322: exit status 2 (494.5372ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-367322 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-367322 -n default-k8s-diff-port-367322
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-367322 -n default-k8s-diff-port-367322
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.48s)
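The Pause step pauses the cluster, checks component status, then unpauses and checks again; `minikube status` exits non-zero (status 2) whenever a component is not Running, which is why the log treats that exit code as "may be ok" while the API server reports Paused and the kubelet reports Stopped. A minimal sketch of the same pause/status/unpause cycle, reusing the binary, profile and status fields from the log; the helper below is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// status reads one field from `minikube status --format`; the non-zero exit
// that occurs while components are paused or stopped is deliberately ignored,
// matching the "status error: exit status 2 (may be ok)" lines above.
func status(bin, profile, field string) string {
	out, _ := exec.Command(bin, "status", "--format", "{{."+field+"}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	bin, profile := "out/minikube-linux-amd64", "default-k8s-diff-port-367322"

	exec.Command(bin, "pause", "-p", profile, "--alsologtostderr", "-v=1").Run()
	// Expected after pause: APIServer = Paused, Kubelet = Stopped.
	fmt.Println("after pause:", status(bin, profile, "APIServer"), "/", status(bin, profile, "Kubelet"))

	exec.Command(bin, "unpause", "-p", profile, "--alsologtostderr", "-v=1").Run()
	fmt.Println("after unpause:", status(bin, profile, "APIServer"), "/", status(bin, profile, "Kubelet"))
}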

                                                
                                    

Test skip (22/313)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.0-beta.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.0-beta.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.0-beta.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-403360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-403360
--- SKIP: TestStartStop/group/disable-driver-mounts (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
E0323 23:26:38.230654   68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
panic.go:522: 
----------------------- debugLogs start: cilium-452361 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-452361" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 23 Mar 2023 23:23:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-120624
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 23 Mar 2023 23:25:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-574316
contexts:
- context:
    cluster: kubernetes-upgrade-120624
    user: kubernetes-upgrade-120624
  name: kubernetes-upgrade-120624
- context:
    cluster: pause-574316
    extensions:
    - extension:
        last-update: Thu, 23 Mar 2023 23:25:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: pause-574316
  name: pause-574316
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-120624
  user:
    client-certificate: /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/kubernetes-upgrade-120624/client.crt
    client-key: /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/kubernetes-upgrade-120624/client.key
- name: pause-574316
  user:
    client-certificate: /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.crt
    client-key: /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-452361

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-452361" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452361"

                                                
                                                
----------------------- debugLogs end: cilium-452361 [took: 3.314751213s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-452361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-452361
--- SKIP: TestNetworkPlugins/group/cilium (3.77s)

                                                
                                    