Test Report: Docker_Linux 15310

af24d50c21096344c09c5fff0b9181d55a181bf0:2022-11-07:26449

Test failures (6/277)

Order  Failed test                                        Duration (s)
251    TestPause/serial/SecondStartNoReconfiguration      51.24
264    TestNetworkPlugins/group/calico/Start              522.46
268    TestNetworkPlugins/group/false/DNS                 280.42
278    TestNetworkPlugins/group/bridge/DNS                372.39
283    TestNetworkPlugins/group/kubenet/DNS               360.27
286    TestNetworkPlugins/group/enable-default-cni/DNS    334.24
TestPause/serial/SecondStartNoReconfiguration (51.24s)
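What this test checks, per the pause_test.go:100 failure below: a second "minikube start" against an already-running pause profile should log "The running cluster does not require reconfiguration". The following is a minimal standalone Go sketch of that check, not minikube's test source; the binary path and profile name are copied from the log below and are specific to this run.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Second start against the existing profile, mirroring the command
	// logged at pause_test.go:92 in this report.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "pause-171530", "--alsologtostderr", "-v=1",
		"--driver=docker", "--container-runtime=docker")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, "second start failed:", err)
		os.Exit(1)
	}
	// The assertion behind pause_test.go:100: a second start of an
	// already-running cluster should need no reconfiguration.
	if !strings.Contains(string(out),
		"The running cluster does not require reconfiguration") {
		fmt.Fprintln(os.Stderr, "FAIL: no-reconfiguration message missing")
		os.Exit(1)
	}
	fmt.Println("PASS: second start required no reconfiguration")
}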

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-171530 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1107 17:17:06.410751   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:17:16.651759   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-171530 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.559158423s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-171530] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node pause-171530 in cluster pause-171530
	* Pulling base image ...
	* Updating the running docker "pause-171530" container ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Done! kubectl is now configured to use "pause-171530" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1107 17:17:05.571294  265599 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:17:05.571401  265599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:17:05.571412  265599 out.go:309] Setting ErrFile to fd 2...
	I1107 17:17:05.571416  265599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:17:05.571524  265599 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
	I1107 17:17:05.572110  265599 out.go:303] Setting JSON to false
	I1107 17:17:05.573931  265599 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3577,"bootTime":1667837849,"procs":1072,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 17:17:05.573997  265599 start.go:126] virtualization: kvm guest
	I1107 17:17:05.576735  265599 out.go:177] * [pause-171530] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 17:17:05.578493  265599 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:17:05.578464  265599 notify.go:220] Checking for updates...
	I1107 17:17:05.579960  265599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:17:05.581495  265599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	I1107 17:17:05.583272  265599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	I1107 17:17:05.584752  265599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 17:17:05.586943  265599 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:05.587399  265599 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:17:05.619996  265599 docker.go:137] docker version: linux-20.10.21
	I1107 17:17:05.620105  265599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:17:05.724537  265599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:54 SystemTime:2022-11-07 17:17:05.642826795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:17:05.724669  265599 docker.go:254] overlay module found
	I1107 17:17:05.726899  265599 out.go:177] * Using the docker driver based on existing profile
	I1107 17:17:05.728247  265599 start.go:282] selected driver: docker
	I1107 17:17:05.728267  265599 start.go:808] validating driver "docker" against &{Name:pause-171530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-171530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:17:05.728376  265599 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:17:05.728459  265599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:17:05.834872  265599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:54 SystemTime:2022-11-07 17:17:05.751080253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:17:05.835529  265599 cni.go:95] Creating CNI manager for ""
	I1107 17:17:05.835549  265599 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 17:17:05.835565  265599 start_flags.go:317] config:
	{Name:pause-171530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-171530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:17:05.839858  265599 out.go:177] * Starting control plane node pause-171530 in cluster pause-171530
	I1107 17:17:05.841800  265599 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 17:17:05.844023  265599 out.go:177] * Pulling base image ...
	I1107 17:17:05.845691  265599 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 17:17:05.845756  265599 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 17:17:05.845776  265599 cache.go:57] Caching tarball of preloaded images
	I1107 17:17:05.845787  265599 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 17:17:05.846094  265599 preload.go:174] Found /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 17:17:05.846111  265599 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 17:17:05.846271  265599 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/config.json ...
	I1107 17:17:05.873255  265599 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 17:17:05.873279  265599 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 17:17:05.873289  265599 cache.go:208] Successfully downloaded all kic artifacts
	I1107 17:17:05.873322  265599 start.go:364] acquiring machines lock for pause-171530: {Name:mk2020e0b0b9cf87e78302c105d3589b81431a7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 17:17:05.873408  265599 start.go:368] acquired machines lock for "pause-171530" in 65.893µs
	I1107 17:17:05.873440  265599 start.go:96] Skipping create...Using existing machine configuration
	I1107 17:17:05.873452  265599 fix.go:55] fixHost starting: 
	I1107 17:17:05.873695  265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
	I1107 17:17:05.904476  265599 fix.go:103] recreateIfNeeded on pause-171530: state=Running err=<nil>
	W1107 17:17:05.904506  265599 fix.go:129] unexpected machine state, will restart: <nil>
	I1107 17:17:05.907141  265599 out.go:177] * Updating the running docker "pause-171530" container ...
	I1107 17:17:05.908814  265599 machine.go:88] provisioning docker machine ...
	I1107 17:17:05.908865  265599 ubuntu.go:169] provisioning hostname "pause-171530"
	I1107 17:17:05.908920  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:05.936340  265599 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:05.936536  265599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I1107 17:17:05.936554  265599 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-171530 && echo "pause-171530" | sudo tee /etc/hostname
	I1107 17:17:06.063498  265599 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-171530
	
	I1107 17:17:06.063580  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:06.089781  265599 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:06.089944  265599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I1107 17:17:06.089970  265599 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-171530' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-171530/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-171530' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 17:17:06.206772  265599 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 17:17:06.206802  265599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-3679/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-3679/.minikube}
	I1107 17:17:06.206824  265599 ubuntu.go:177] setting up certificates
	I1107 17:17:06.206833  265599 provision.go:83] configureAuth start
	I1107 17:17:06.206876  265599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-171530
	I1107 17:17:06.233722  265599 provision.go:138] copyHostCerts
	I1107 17:17:06.233807  265599 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem, removing ...
	I1107 17:17:06.233825  265599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem
	I1107 17:17:06.233906  265599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem (1082 bytes)
	I1107 17:17:06.233996  265599 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem, removing ...
	I1107 17:17:06.234011  265599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem
	I1107 17:17:06.234043  265599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem (1123 bytes)
	I1107 17:17:06.234121  265599 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem, removing ...
	I1107 17:17:06.234137  265599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem
	I1107 17:17:06.234180  265599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem (1675 bytes)
	I1107 17:17:06.234287  265599 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem org=jenkins.pause-171530 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube pause-171530]
	I1107 17:17:06.399357  265599 provision.go:172] copyRemoteCerts
	I1107 17:17:06.399419  265599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 17:17:06.399460  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:06.426411  265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
	I1107 17:17:06.514347  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 17:17:06.533219  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1107 17:17:06.553157  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 17:17:06.573469  265599 provision.go:86] duration metric: configureAuth took 366.618787ms
	I1107 17:17:06.573508  265599 ubuntu.go:193] setting minikube options for container-runtime
	I1107 17:17:06.573739  265599 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:06.573831  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:06.601570  265599 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:06.601719  265599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I1107 17:17:06.601733  265599 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 17:17:06.719151  265599 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 17:17:06.719182  265599 ubuntu.go:71] root file system type: overlay
	I1107 17:17:06.719350  265599 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 17:17:06.719411  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:06.746626  265599 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:06.746845  265599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I1107 17:17:06.746914  265599 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 17:17:06.872971  265599 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 17:17:06.873051  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:06.902094  265599 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:06.902277  265599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49369 <nil> <nil>}
	I1107 17:17:06.902307  265599 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 17:17:07.027043  265599 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 17:17:07.027081  265599 machine.go:91] provisioned docker machine in 1.118240745s
	I1107 17:17:07.027091  265599 start.go:300] post-start starting for "pause-171530" (driver="docker")
	I1107 17:17:07.027101  265599 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 17:17:07.027157  265599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 17:17:07.027203  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:07.055663  265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
	I1107 17:17:07.152315  265599 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 17:17:07.155419  265599 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 17:17:07.155449  265599 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 17:17:07.155461  265599 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 17:17:07.155469  265599 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 17:17:07.155484  265599 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/addons for local assets ...
	I1107 17:17:07.155537  265599 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/files for local assets ...
	I1107 17:17:07.155621  265599 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem -> 101292.pem in /etc/ssl/certs
	I1107 17:17:07.155717  265599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 17:17:07.163115  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /etc/ssl/certs/101292.pem (1708 bytes)
	I1107 17:17:07.259375  265599 start.go:303] post-start completed in 232.268718ms
	I1107 17:17:07.259457  265599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 17:17:07.259504  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:07.292327  265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
	I1107 17:17:07.380341  265599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 17:17:07.384765  265599 fix.go:57] fixHost completed within 1.511308744s
	I1107 17:17:07.384788  265599 start.go:83] releasing machines lock for "pause-171530", held for 1.511368311s
	I1107 17:17:07.384864  265599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-171530
	I1107 17:17:07.413876  265599 ssh_runner.go:195] Run: systemctl --version
	I1107 17:17:07.413938  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:07.413976  265599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 17:17:07.414049  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:07.447827  265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
	I1107 17:17:07.448603  265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
	I1107 17:17:07.565735  265599 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 17:17:07.580677  265599 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 17:17:07.580749  265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 17:17:07.595113  265599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 17:17:07.609542  265599 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 17:17:07.717706  265599 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 17:17:07.844274  265599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:17:07.951423  265599 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 17:17:24.054911  265599 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.103437064s)
	I1107 17:17:24.054984  265599 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 17:17:24.265227  265599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:17:24.361565  265599 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 17:17:24.371575  265599 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 17:17:24.371644  265599 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 17:17:24.374825  265599 start.go:472] Will wait 60s for crictl version
	I1107 17:17:24.374887  265599 ssh_runner.go:195] Run: sudo crictl version
	I1107 17:17:24.405233  265599 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 17:17:24.405294  265599 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 17:17:24.433798  265599 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 17:17:24.469003  265599 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 17:17:24.469098  265599 cli_runner.go:164] Run: docker network inspect pause-171530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:17:24.494258  265599 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1107 17:17:24.497966  265599 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 17:17:24.498057  265599 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 17:17:24.522994  265599 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 17:17:24.523018  265599 docker.go:543] Images already preloaded, skipping extraction
	I1107 17:17:24.523070  265599 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 17:17:24.547938  265599 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 17:17:24.547962  265599 cache_images.go:84] Images are preloaded, skipping loading
	I1107 17:17:24.548029  265599 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 17:17:24.624335  265599 cni.go:95] Creating CNI manager for ""
	I1107 17:17:24.624371  265599 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 17:17:24.624381  265599 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 17:17:24.624400  265599 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-171530 NodeName:pause-171530 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 17:17:24.624599  265599 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-171530"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 17:17:24.624734  265599 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-171530 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:pause-171530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 17:17:24.624798  265599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 17:17:24.634057  265599 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 17:17:24.634131  265599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 17:17:24.641098  265599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (474 bytes)
	I1107 17:17:24.654380  265599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 17:17:24.668303  265599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2035 bytes)
	I1107 17:17:24.681843  265599 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1107 17:17:24.685111  265599 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530 for IP: 192.168.85.2
	I1107 17:17:24.685219  265599 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key
	I1107 17:17:24.685293  265599 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key
	I1107 17:17:24.685377  265599 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key
	I1107 17:17:24.685457  265599 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/apiserver.key.43b9df8c
	I1107 17:17:24.685521  265599 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/proxy-client.key
	I1107 17:17:24.685626  265599 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem (1338 bytes)
	W1107 17:17:24.685663  265599 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129_empty.pem, impossibly tiny 0 bytes
	I1107 17:17:24.685686  265599 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 17:17:24.685722  265599 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem (1082 bytes)
	I1107 17:17:24.685755  265599 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem (1123 bytes)
	I1107 17:17:24.685791  265599 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem (1675 bytes)
	I1107 17:17:24.685845  265599 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem (1708 bytes)
	I1107 17:17:24.686475  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 17:17:24.705101  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 17:17:24.724639  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 17:17:24.742006  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 17:17:24.760861  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 17:17:24.780509  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 17:17:24.799673  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 17:17:24.819781  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 17:17:24.839265  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem --> /usr/share/ca-certificates/10129.pem (1338 bytes)
	I1107 17:17:24.857824  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /usr/share/ca-certificates/101292.pem (1708 bytes)
	I1107 17:17:24.876187  265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 17:17:24.894054  265599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 17:17:24.907411  265599 ssh_runner.go:195] Run: openssl version
	I1107 17:17:24.912881  265599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 17:17:24.921594  265599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:17:24.925481  265599 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:17:24.925551  265599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:17:24.930905  265599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 17:17:24.938422  265599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10129.pem && ln -fs /usr/share/ca-certificates/10129.pem /etc/ssl/certs/10129.pem"
	I1107 17:17:24.946334  265599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10129.pem
	I1107 17:17:24.949621  265599 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/10129.pem
	I1107 17:17:24.949680  265599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10129.pem
	I1107 17:17:24.955062  265599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10129.pem /etc/ssl/certs/51391683.0"
	I1107 17:17:24.962592  265599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101292.pem && ln -fs /usr/share/ca-certificates/101292.pem /etc/ssl/certs/101292.pem"
	I1107 17:17:24.970789  265599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101292.pem
	I1107 17:17:24.974091  265599 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/101292.pem
	I1107 17:17:24.974155  265599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101292.pem
	I1107 17:17:24.979020  265599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101292.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 17:17:24.986014  265599 kubeadm.go:396] StartCluster: {Name:pause-171530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-171530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:17:24.986135  265599 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 17:17:25.008611  265599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 17:17:25.015852  265599 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1107 17:17:25.015875  265599 kubeadm.go:627] restartCluster start
	I1107 17:17:25.015912  265599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 17:17:25.022497  265599 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:25.023332  265599 kubeconfig.go:92] found "pause-171530" server: "https://192.168.85.2:8443"
	I1107 17:17:25.024642  265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 17:17:25.025325  265599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 17:17:25.032719  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:25.032770  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:25.041455  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:25.241876  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:25.241954  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:25.253157  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:25.442363  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:25.442456  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:25.451732  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:25.642291  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:25.642376  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:25.653784  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:25.842094  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:25.842176  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:25.851161  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:26.042481  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:26.042570  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:26.051935  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:26.242166  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:26.242259  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:26.252068  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:26.442444  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:26.442521  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:26.452466  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:26.641665  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:26.641756  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:26.651194  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:26.842411  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:26.842497  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:26.852123  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:27.042449  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:27.042520  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:27.051754  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:27.242055  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:27.242125  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:27.252675  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:27.442021  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:27.442086  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:27.452028  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:27.642321  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:27.642396  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:27.655163  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:27.842316  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:27.842406  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:27.919765  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:28.042018  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:28.042091  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:28.066147  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:28.066178  265599 api_server.go:165] Checking apiserver status ...
	I1107 17:17:28.066222  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 17:17:28.134945  265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:28.134975  265599 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1107 17:17:28.134983  265599 kubeadm.go:1114] stopping kube-system containers ...
	I1107 17:17:28.135044  265599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 17:17:28.245468  265599 docker.go:444] Stopping containers: [bc4811d3f9f1 7c093d736ba0 42f2c39561b1 c9629a7195e0 c109021f97b0 cdc8d9ab8c01 6977abb3bdd5 70d021ab7352 509fa11824cf 0d39f99a8173 1ed4b2e0931b 6a6aa007d5d6 72b0bc6dbf86 307735ded540 925a87ac16d6 6d1abd3e30d7 29682c53aad3 c0cb5971f049 69420675fbf2 a947b84a16e9 cd9fbcf66902 0329006f68e6 0c9d8ff11e72]
	I1107 17:17:28.245557  265599 ssh_runner.go:195] Run: docker stop bc4811d3f9f1 7c093d736ba0 42f2c39561b1 c9629a7195e0 c109021f97b0 cdc8d9ab8c01 6977abb3bdd5 70d021ab7352 509fa11824cf 0d39f99a8173 1ed4b2e0931b 6a6aa007d5d6 72b0bc6dbf86 307735ded540 925a87ac16d6 6d1abd3e30d7 29682c53aad3 c0cb5971f049 69420675fbf2 a947b84a16e9 cd9fbcf66902 0329006f68e6 0c9d8ff11e72
	I1107 17:17:28.885183  265599 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 17:17:28.970717  265599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:17:28.980900  265599 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov  7 17:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Nov  7 17:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Nov  7 17:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov  7 17:15 /etc/kubernetes/scheduler.conf
	
	I1107 17:17:28.980971  265599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1107 17:17:28.989995  265599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1107 17:17:28.997436  265599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1107 17:17:29.005331  265599 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:29.005401  265599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1107 17:17:29.016147  265599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1107 17:17:29.025419  265599 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1107 17:17:29.025483  265599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1107 17:17:29.034956  265599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 17:17:29.043620  265599 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 17:17:29.043651  265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:17:29.101436  265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:17:29.752261  265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:17:29.918922  265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:17:29.991213  265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:17:30.133407  265599 api_server.go:51] waiting for apiserver process to appear ...
	I1107 17:17:30.133501  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:17:30.646530  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:17:31.146257  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:17:31.646851  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:17:31.727916  265599 api_server.go:71] duration metric: took 1.594511389s to wait for apiserver process to appear ...
	I1107 17:17:31.727946  265599 api_server.go:87] waiting for apiserver healthz status ...
	I1107 17:17:31.727959  265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1107 17:17:34.924268  265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 17:17:34.924304  265599 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 17:17:35.424809  265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1107 17:17:35.429498  265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 17:17:35.429540  265599 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 17:17:35.925106  265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1107 17:17:35.931883  265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 17:17:35.931924  265599 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 17:17:36.424461  265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1107 17:17:36.430147  265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1107 17:17:36.437609  265599 api_server.go:140] control plane version: v1.25.3
	I1107 17:17:36.437636  265599 api_server.go:130] duration metric: took 4.709684273s to wait for apiserver health ...
	I1107 17:17:36.437645  265599 cni.go:95] Creating CNI manager for ""
	I1107 17:17:36.437652  265599 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 17:17:36.437659  265599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 17:17:36.447744  265599 system_pods.go:59] 6 kube-system pods found
	I1107 17:17:36.447788  265599 system_pods.go:61] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 17:17:36.447801  265599 system_pods.go:61] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 17:17:36.447812  265599 system_pods.go:61] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 17:17:36.447823  265599 system_pods.go:61] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 17:17:36.447833  265599 system_pods.go:61] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 17:17:36.447851  265599 system_pods.go:61] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
	I1107 17:17:36.447860  265599 system_pods.go:74] duration metric: took 10.195758ms to wait for pod list to return data ...
	I1107 17:17:36.447873  265599 node_conditions.go:102] verifying NodePressure condition ...
	I1107 17:17:36.452085  265599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 17:17:36.452127  265599 node_conditions.go:123] node cpu capacity is 8
	I1107 17:17:36.452142  265599 node_conditions.go:105] duration metric: took 4.263555ms to run NodePressure ...
	I1107 17:17:36.452169  265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:17:36.655569  265599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1107 17:17:36.659806  265599 kubeadm.go:778] kubelet initialised
	I1107 17:17:36.659830  265599 kubeadm.go:779] duration metric: took 4.236781ms waiting for restarted kubelet to initialise ...
	I1107 17:17:36.659837  265599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:17:36.664724  265599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:38.678405  265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
	I1107 17:17:40.678711  265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
	I1107 17:17:42.751499  265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
	I1107 17:17:45.178920  265599 pod_ready.go:92] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:45.178953  265599 pod_ready.go:81] duration metric: took 8.514203128s waiting for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:45.178969  265599 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:47.190344  265599 pod_ready.go:92] pod "etcd-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:47.190385  265599 pod_ready.go:81] duration metric: took 2.011408194s waiting for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:47.190401  265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.703190  265599 pod_ready.go:92] pod "kube-apiserver-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:48.703227  265599 pod_ready.go:81] duration metric: took 1.512816405s waiting for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.703241  265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.708302  265599 pod_ready.go:92] pod "kube-controller-manager-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:48.708326  265599 pod_ready.go:81] duration metric: took 5.077395ms waiting for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.708335  265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.713353  265599 pod_ready.go:92] pod "kube-proxy-627q2" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:48.713373  265599 pod_ready.go:81] duration metric: took 5.032187ms waiting for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.713382  265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.718276  265599 pod_ready.go:92] pod "kube-scheduler-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:48.718298  265599 pod_ready.go:81] duration metric: took 4.909784ms waiting for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.718308  265599 pod_ready.go:38] duration metric: took 12.058462568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:17:48.718326  265599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 17:17:48.725688  265599 ops.go:34] apiserver oom_adj: -16
	I1107 17:17:48.725713  265599 kubeadm.go:631] restartCluster took 23.70983267s
	I1107 17:17:48.725723  265599 kubeadm.go:398] StartCluster complete in 23.739715552s
	I1107 17:17:48.725742  265599 settings.go:142] acquiring lock: {Name:mke91789b0d6e4070893f671805542745cc27d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:48.725827  265599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15310-3679/kubeconfig
	I1107 17:17:48.727240  265599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/kubeconfig: {Name:mk0b702cd34f333a37178f1520735cf3ce85aad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:48.728367  265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 17:17:48.731431  265599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-171530" rescaled to 1
	I1107 17:17:48.731509  265599 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 17:17:48.735381  265599 out.go:177] * Verifying Kubernetes components...
	I1107 17:17:48.731563  265599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 17:17:48.731586  265599 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I1107 17:17:48.731727  265599 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:48.737019  265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:17:48.737075  265599 addons.go:65] Setting default-storageclass=true in profile "pause-171530"
	I1107 17:17:48.737103  265599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-171530"
	I1107 17:17:48.737073  265599 addons.go:65] Setting storage-provisioner=true in profile "pause-171530"
	I1107 17:17:48.737183  265599 addons.go:227] Setting addon storage-provisioner=true in "pause-171530"
	W1107 17:17:48.737191  265599 addons.go:236] addon storage-provisioner should already be in state true
	I1107 17:17:48.737247  265599 host.go:66] Checking if "pause-171530" exists ...
	I1107 17:17:48.737345  265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
	I1107 17:17:48.737690  265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
	I1107 17:17:48.748838  265599 node_ready.go:35] waiting up to 6m0s for node "pause-171530" to be "Ready" ...
	I1107 17:17:48.755501  265599 node_ready.go:49] node "pause-171530" has status "Ready":"True"
	I1107 17:17:48.755530  265599 node_ready.go:38] duration metric: took 6.650143ms waiting for node "pause-171530" to be "Ready" ...
	I1107 17:17:48.755544  265599 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:17:48.774070  265599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:17:48.776053  265599 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 17:17:48.776086  265599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 17:17:48.776141  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:48.780418  265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 17:17:48.783994  265599 addons.go:227] Setting addon default-storageclass=true in "pause-171530"
	W1107 17:17:48.784033  265599 addons.go:236] addon default-storageclass should already be in state true
	I1107 17:17:48.784066  265599 host.go:66] Checking if "pause-171530" exists ...
	I1107 17:17:48.784533  265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
	I1107 17:17:48.791755  265599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.827118  265599 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 17:17:48.827146  265599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 17:17:48.827202  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:48.832614  265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
	I1107 17:17:48.844192  265599 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1107 17:17:48.858350  265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
	I1107 17:17:48.935269  265599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 17:17:48.958923  265599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 17:17:49.187938  265599 pod_ready.go:92] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:49.187970  265599 pod_ready.go:81] duration metric: took 396.174585ms waiting for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.187985  265599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.588753  265599 pod_ready.go:92] pod "etcd-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:49.588785  265599 pod_ready.go:81] duration metric: took 400.791096ms waiting for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.588799  265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.758403  265599 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 17:17:49.760036  265599 addons.go:488] enableAddons completed in 1.028452371s
	I1107 17:17:49.988064  265599 pod_ready.go:92] pod "kube-apiserver-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:49.988085  265599 pod_ready.go:81] duration metric: took 399.27917ms waiting for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.988096  265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:50.387943  265599 pod_ready.go:92] pod "kube-controller-manager-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:50.387964  265599 pod_ready.go:81] duration metric: took 399.861996ms waiting for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:50.387975  265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:50.787240  265599 pod_ready.go:92] pod "kube-proxy-627q2" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:50.787266  265599 pod_ready.go:81] duration metric: took 399.283504ms waiting for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:50.787279  265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:51.187853  265599 pod_ready.go:92] pod "kube-scheduler-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:51.187885  265599 pod_ready.go:81] duration metric: took 400.597643ms waiting for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:51.187896  265599 pod_ready.go:38] duration metric: took 2.432339677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:17:51.187921  265599 api_server.go:51] waiting for apiserver process to appear ...
	I1107 17:17:51.187970  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:17:51.198604  265599 api_server.go:71] duration metric: took 2.467050632s to wait for apiserver process to appear ...
	I1107 17:17:51.198640  265599 api_server.go:87] waiting for apiserver healthz status ...
	I1107 17:17:51.198650  265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1107 17:17:51.203228  265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1107 17:17:51.204215  265599 api_server.go:140] control plane version: v1.25.3
	I1107 17:17:51.204244  265599 api_server.go:130] duration metric: took 5.597242ms to wait for apiserver health ...
	I1107 17:17:51.204255  265599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 17:17:51.389884  265599 system_pods.go:59] 7 kube-system pods found
	I1107 17:17:51.389918  265599 system_pods.go:61] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running
	I1107 17:17:51.389923  265599 system_pods.go:61] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running
	I1107 17:17:51.389927  265599 system_pods.go:61] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running
	I1107 17:17:51.389932  265599 system_pods.go:61] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running
	I1107 17:17:51.389936  265599 system_pods.go:61] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running
	I1107 17:17:51.389940  265599 system_pods.go:61] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
	I1107 17:17:51.389944  265599 system_pods.go:61] "storage-provisioner" [225d8eea-c00a-46a3-8b89-abb34458db76] Running
	I1107 17:17:51.389949  265599 system_pods.go:74] duration metric: took 185.688763ms to wait for pod list to return data ...
	I1107 17:17:51.389958  265599 default_sa.go:34] waiting for default service account to be created ...
	I1107 17:17:51.587856  265599 default_sa.go:45] found service account: "default"
	I1107 17:17:51.587885  265599 default_sa.go:55] duration metric: took 197.921282ms for default service account to be created ...
	I1107 17:17:51.587896  265599 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 17:17:51.791610  265599 system_pods.go:86] 7 kube-system pods found
	I1107 17:17:51.791656  265599 system_pods.go:89] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running
	I1107 17:17:51.791666  265599 system_pods.go:89] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running
	I1107 17:17:51.791683  265599 system_pods.go:89] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running
	I1107 17:17:51.791692  265599 system_pods.go:89] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running
	I1107 17:17:51.791699  265599 system_pods.go:89] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running
	I1107 17:17:51.791707  265599 system_pods.go:89] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
	I1107 17:17:51.791717  265599 system_pods.go:89] "storage-provisioner" [225d8eea-c00a-46a3-8b89-abb34458db76] Running
	I1107 17:17:51.791725  265599 system_pods.go:126] duration metric: took 203.823982ms to wait for k8s-apps to be running ...
	I1107 17:17:51.791734  265599 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 17:17:51.791785  265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:17:51.802112  265599 system_svc.go:56] duration metric: took 10.369415ms WaitForService to wait for kubelet.
	I1107 17:17:51.802147  265599 kubeadm.go:573] duration metric: took 3.070599627s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 17:17:51.802170  265599 node_conditions.go:102] verifying NodePressure condition ...
	I1107 17:17:51.987329  265599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 17:17:51.987365  265599 node_conditions.go:123] node cpu capacity is 8
	I1107 17:17:51.987379  265599 node_conditions.go:105] duration metric: took 185.202183ms to run NodePressure ...
	I1107 17:17:51.987392  265599 start.go:217] waiting for startup goroutines ...
	I1107 17:17:51.987763  265599 ssh_runner.go:195] Run: rm -f paused
	I1107 17:17:52.043023  265599 start.go:506] kubectl: 1.25.3, cluster: 1.25.3 (minor skew: 0)
	I1107 17:17:52.045707  265599 out.go:177] * Done! kubectl is now configured to use "pause-171530" cluster and "default" namespace by default

** /stderr **
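Editor's note: the assertion at pause_test.go:100 fails because the second start never logs "The running cluster does not require reconfiguration". The stderr trace above shows why: every `sudo pgrep -xnf kube-apiserver.*minikube.*` probe between 17:17:25 and 17:17:28 exited with status 1 (no matching process), so restartCluster concluded "needs reconfigure", stopped the kube-system containers, and re-ran the individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) before waiting for healthz and pod readiness. Below is a minimal Go sketch of that probe-then-fall-back pattern, assuming the ~200ms retry cadence visible in the timestamps; the helper names are illustrative, not minikube's actual functions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID runs the same probe the log records on each attempt.
// pgrep exits non-zero when nothing matches, which surfaces here as err != nil.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	return string(out), err
}

func main() {
	// The trace shows roughly 17 probes at ~200ms intervals before giving up.
	deadline := time.Now().Add(3 * time.Second)
	for time.Now().Before(deadline) {
		if pid, err := apiserverPID(); err == nil {
			fmt.Printf("apiserver pid: %s", pid) // healthy: no reconfiguration needed
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	// This is the branch the failing run took (kubeadm.go:602 above).
	fmt.Println("needs reconfigure: apiserver error: timed out waiting for the condition")
}

The open triage question is why no kube-apiserver process matched the pattern during that three-second window, given that the container itself was never restarted (see the docker inspect dump below).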
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-171530
helpers_test.go:235: (dbg) docker inspect pause-171530:

-- stdout --
	[
	    {
	        "Id": "e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550",
	        "Created": "2022-11-07T17:15:38.935447727Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 241803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:15:39.387509554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/hostname",
	        "HostsPath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/hosts",
	        "LogPath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550-json.log",
	        "Name": "/pause-171530",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-171530:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-171530",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886-init/diff:/var/lib/docker/overlay2/2fd1fc00a589bf61b81b15f5596b1c421509b0ed94a0073de8df35851e0104fd/diff:/var/lib/docker/overlay2/ca94f1e5c7c58ab040213044ce029a51c1ea19ec2ae58d30e36b7c461dac5b75/diff:/var/lib/docker/overlay2/e42a9a60bb0ccca9f6ebc3bec24f638bafba48d604bd99af2d43cee1225c9466/diff:/var/lib/docker/overlay2/3474eef000daf16045ddcd082155e02d3adc432e026d93a79f6650da6b7bbe2c/diff:/var/lib/docker/overlay2/2c37502622a619527bab9f0e94b3c9e8ea823ff6ffdc84760dfeca0a7a1d2ba9/diff:/var/lib/docker/overlay2/c89ceddb787dc6015274fbee4e47c019bcb7637c523d5d053aafccc75f2d8c5b/diff:/var/lib/docker/overlay2/d13aa31ebe50e77225149ff2f5361d34b4b4dcbeb3b0bc0a15e35f3d4a8b7756/diff:/var/lib/docker/overlay2/c95f6f4ff58fc27002c40206891dabcbf4ed1b39c8f3584432f15b72a15920c1/diff:/var/lib/docker/overlay2/609367ca657fad1a480fd0d0075ab9d34c5556928b3f753bf75b7937a8b74ee8/diff:/var/lib/docker/overlay2/02a742
81aea9f2e787ac6f6c4ac9f7d01ae11e33439e4787dff010ca49918d6b/diff:/var/lib/docker/overlay2/97be1349403116decda81fc5f089a2db445d4c5a72b26e4fa1d2d69bc8f5b867/diff:/var/lib/docker/overlay2/0a0a5163f70151b385895e742fd238ec8e8e4f76def9c619677619db2a6d5b08/diff:/var/lib/docker/overlay2/5659ee0023498bf40cbbec8f9a2f0fddfc95419655c96d6605a451a2c46c6036/diff:/var/lib/docker/overlay2/490c47e44446d2723d18ba6ae67ce415128dbc5fd055c8b0c3af734b0a072691/diff:/var/lib/docker/overlay2/303dd4de2e78ffebe2a8b0327ff89f434f0d94efec1239397b26f584669c6688/diff:/var/lib/docker/overlay2/57cd5e60d0e6efc4eba5b1d3312be411722b2dbe779b38d7e29451cb53536ed6/diff:/var/lib/docker/overlay2/ebe05a325862fb9343e31e938f8b0cbebb9eac74b601c1cbd7c51d82932d20b4/diff:/var/lib/docker/overlay2/8536312e6228bdf272e430339824f16762dc9bb32d3fbcd5a2704ed1cbd37e64/diff:/var/lib/docker/overlay2/2598be8b2bb739fc75e87aee71f5af665456fffb16f599676335c74f15ae6391/diff:/var/lib/docker/overlay2/4d2d35e9d340ea3932b4095e279f70853bcd0793bb323921891c0c769627f2c5/diff:/var/lib/d
ocker/overlay2/4d826174051f4f89d8c7f9e2a1c0deeedf4fe1375b7e4805b1507830dfcb85eb/diff:/var/lib/docker/overlay2/04619ad2580acc4047033104b728374c0bcab41b326af981fd92107ded6f8715/diff:/var/lib/docker/overlay2/653c7b7d9b3ff747507ce6d4c8750195142e3c1e5dd8776d1f5ad68da192b0c3/diff:/var/lib/docker/overlay2/7feba1b41892a093a69f3006a5955540f607a8c16986fd594da627470dc20b50/diff:/var/lib/docker/overlay2/edfa060eb3735b8c7368bfa84da65c47f0381d016fcb1f23338cbe984ffb4309/diff:/var/lib/docker/overlay2/7bc7096889faa87a4f3542932b25941d0cb3ebdca2eb7a8323c0b437c946ca84/diff:/var/lib/docker/overlay2/6d9c19e156f90bc4ce093d160661251be6f95a51a9e0712f2a79c6a08cd996cd/diff:/var/lib/docker/overlay2/f5ba9cd7997e8cdfc6fb27c76c069767b07cc8201e7e0ef7c1a3ffa443525fb1/diff:/var/lib/docker/overlay2/43277eab35f847188e2fbacd196549314d6463948690b6eb7218cfe6ecc19b17/diff:/var/lib/docker/overlay2/ef090d552b4022f86d7bdf79bbc298e347a3e535c804f65b2d33683e0864901d/diff:/var/lib/docker/overlay2/8ef9f5644e2d99ddd144a8c44988dff320901634fa10fdd2ceb63b44464
942d2/diff:/var/lib/docker/overlay2/8db604496435b1f4a13ceca647b7f365eccc2122c46c001b46d3343020dce882/diff:/var/lib/docker/overlay2/aa63ff25f14d23e22d30a5f6ffdca4dc610d3a56fda7fcf8128955229e8179ac/diff:/var/lib/docker/overlay2/d8e836f399115dec3f57c3bdae8cfe9459ca00fb4db1619f7c32a54c17f2696a/diff:/var/lib/docker/overlay2/e8706f9f543307c51f76840c008a49519273628b367c558c81472382319ee067/diff:/var/lib/docker/overlay2/410562df42124ab024d1aed6c452424839223794de2fac149e33e3a2aaad7db5/diff:/var/lib/docker/overlay2/24ba0b84d34cf83f31c6e6420465d970cd940052bc918b875c8320dfbeccb3fc/diff:/var/lib/docker/overlay2/cfd31a3b8ba33133312104bac0d05c9334975dd18cb3dfff6ba901668d8935cb/diff:/var/lib/docker/overlay2/2bfc0a7a2746e54d77a9a1838e077ca17b8bd024966ed7fc7f4cfceffc1e41c9/diff:/var/lib/docker/overlay2/67ae264c7fe2b9c7f659d1bbdccdc178c34230e3b6aa815b7f3ff24d50f1ca5a/diff:/var/lib/docker/overlay2/2f921d0a0caaca67918401f3f9b193c0e89b931f174e447a79ba82b2a5743c6e/diff:/var/lib/docker/overlay2/8f6f97c7885b0f2745adf21261ead041f0b7ce
88d0ab325cfafd1cf3b9aa07f3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886/merged",
	                "UpperDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886/diff",
	                "WorkDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-171530",
	                "Source": "/var/lib/docker/volumes/pause-171530/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-171530",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-171530",
	                "name.minikube.sigs.k8s.io": "pause-171530",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9adb1a46308a44769722d4564542b00b60699767153f3cfdcf9adf8a13796ed",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49369"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49368"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49365"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49367"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49366"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a9adb1a46308",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-171530": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e3da15937387",
	                        "pause-171530"
	                    ],
	                    "NetworkID": "39ab6118a516dd29e38bb2d528840c29808f0aaff829c163fb133591392f975d",
	                    "EndpointID": "f05b8ecc16b4a46e2d24102363dbe97c03cc31d021c5d068a263b87ac53329f9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
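Editor's note: the inspect dump confirms the container itself stayed healthy throughout: State.Status is "running", RestartCount is 0, StartedAt (17:15:39) predates the failing second start, and the cluster IP 192.168.85.2 matches the endpoint the healthz checks used. When only a couple of fields matter, the same Go-template mechanism the helpers use for --format can narrow the output; a sketch, with field paths read off the dump above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The -f argument is a Go template, the same mechanism as the
	// helpers' --format flags elsewhere in this report.
	out, err := exec.Command("docker", "inspect", "-f",
		`{{.State.Status}} {{(index .NetworkSettings.Networks "pause-171530").IPAddress}}`,
		"pause-171530").CombinedOutput()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Print(string(out)) // for the dump above this prints: running 192.168.85.2
}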
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-171530 -n pause-171530
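Editor's note: --format={{.Host}} here, like the -f templates above, is a Go text/template evaluated against a status value. A self-contained sketch of that mechanism follows; the Status type is a stand-in for illustration, not minikube's real status struct.

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in struct, not minikube's actual status type.
type Status struct {
	Host    string
	Kubelet string
}

func main() {
	// Equivalent of passing --format={{.Host}}: parse the flag value as a
	// template and execute it against the populated status value.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running"}) // prints "Running"
}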
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-171530 logs -n 25

=== CONT  TestPause/serial/SecondStartNoReconfiguration
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-171530 logs -n 25: (1.37022974s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | cert-options-171318 ssh               | cert-options-171318       | jenkins | v1.28.0 | 07 Nov 22 17:13 UTC | 07 Nov 22 17:13 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-171318 -- sudo        | cert-options-171318       | jenkins | v1.28.0 | 07 Nov 22 17:13 UTC | 07 Nov 22 17:13 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-171318                | cert-options-171318       | jenkins | v1.28.0 | 07 Nov 22 17:13 UTC | 07 Nov 22 17:13 UTC |
	| ssh     | docker-flags-171335 ssh               | docker-flags-171335       | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-171335 ssh               | docker-flags-171335       | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-171335                | docker-flags-171335       | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
	| start   | -p kubernetes-upgrade-171418          | kubernetes-upgrade-171418 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-171351             | missing-upgrade-171351    | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:15 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-171343             | stopped-upgrade-171343    | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:15 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-171418          | kubernetes-upgrade-171418 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:15 UTC |
	| delete  | -p stopped-upgrade-171343             | stopped-upgrade-171343    | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:15 UTC |
	| start   | -p kubernetes-upgrade-171418          | kubernetes-upgrade-171418 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-171351             | missing-upgrade-171351    | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:15 UTC |
	| start   | -p pause-171530 --memory=2048         | pause-171530              | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:17 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p cert-expiration-171219             | cert-expiration-171219    | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:16 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p running-upgrade-171507             | running-upgrade-171507    | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-171507             | running-upgrade-171507    | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:16 UTC |
	| start   | -p auto-171300 --memory=2048          | auto-171300               | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:17 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m         |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-171219             | cert-expiration-171219    | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:16 UTC |
	| start   | -p kindnet-171300                     | kindnet-171300            | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m         |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker         |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p pause-171530                       | pause-171530              | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171300 pgrep -a            | kindnet-171300            | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-171300                     | kindnet-171300            | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
	| start   | -p cilium-171301 --memory=2048        | cilium-171301             | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m         |                           |         |         |                     |                     |
	|         | --cni=cilium --driver=docker          |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | -p auto-171300 pgrep -a               | auto-171300               | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 17:17:39
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
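The four header lines above are the standard glog/klog preamble; every entry that follows uses the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg layout (I=info, W=warning, E=error, F=fatal). A minimal sketch that emits lines in the same format, assuming k8s.io/klog/v2 is available:

	package main

	import (
		"flag"

		"k8s.io/klog/v2"
	)

	func main() {
		klog.InitFlags(nil)
		flag.Set("logtostderr", "true") // write I/W/E lines to stderr, as minikube does
		flag.Parse()
		defer klog.Flush()

		klog.Info("emitted as an I-line, like the start.go entries above")
		klog.Warning("emitted as a W-line, like the 'No container was found' entries")
	}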
	I1107 17:17:39.909782  273963 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:17:39.909910  273963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:17:39.909920  273963 out.go:309] Setting ErrFile to fd 2...
	I1107 17:17:39.909925  273963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:17:39.910036  273963 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
	I1107 17:17:39.910611  273963 out.go:303] Setting JSON to false
	I1107 17:17:39.912756  273963 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3611,"bootTime":1667837849,"procs":1171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 17:17:39.912825  273963 start.go:126] virtualization: kvm guest
	I1107 17:17:39.916343  273963 out.go:177] * [cilium-171301] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 17:17:39.918167  273963 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:17:39.918122  273963 notify.go:220] Checking for updates...
	I1107 17:17:39.919930  273963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:17:39.921709  273963 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	I1107 17:17:39.923329  273963 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	I1107 17:17:39.924851  273963 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 17:17:39.927024  273963 config.go:180] Loaded profile config "auto-171300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:39.927142  273963 config.go:180] Loaded profile config "kubernetes-upgrade-171418": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:39.927235  273963 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:39.927287  273963 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:17:39.959963  273963 docker.go:137] docker version: linux-20.10.21
	I1107 17:17:39.960043  273963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:17:40.066046  273963 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-07 17:17:39.981648038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:17:40.066199  273963 docker.go:254] overlay module found
	I1107 17:17:40.069246  273963 out.go:177] * Using the docker driver based on user configuration
	I1107 17:17:40.070821  273963 start.go:282] selected driver: docker
	I1107 17:17:40.070848  273963 start.go:808] validating driver "docker" against <nil>
	I1107 17:17:40.070871  273963 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:17:40.072076  273963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:17:40.184024  273963 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-07 17:17:40.095572549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:17:40.184162  273963 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 17:17:40.184327  273963 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 17:17:40.186905  273963 out.go:177] * Using Docker driver with root privileges
	I1107 17:17:40.188888  273963 cni.go:95] Creating CNI manager for "cilium"
	I1107 17:17:40.188919  273963 start_flags.go:312] Found "Cilium" CNI - setting NetworkPlugin=cni
	I1107 17:17:40.188929  273963 start_flags.go:317] config:
	{Name:cilium-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:17:40.191042  273963 out.go:177] * Starting control plane node cilium-171301 in cluster cilium-171301
	I1107 17:17:40.192756  273963 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 17:17:40.194622  273963 out.go:177] * Pulling base image ...
	I1107 17:17:40.196366  273963 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 17:17:40.196424  273963 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 17:17:40.196439  273963 cache.go:57] Caching tarball of preloaded images
	I1107 17:17:40.196478  273963 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 17:17:40.196755  273963 preload.go:174] Found /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 17:17:40.196770  273963 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 17:17:40.196994  273963 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/config.json ...
	I1107 17:17:40.197037  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/config.json: {Name:mke8d5318de654621f86e157b3b792411142e89b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:40.226030  273963 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 17:17:40.226064  273963 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 17:17:40.226085  273963 cache.go:208] Successfully downloaded all kic artifacts
	I1107 17:17:40.226119  273963 start.go:364] acquiring machines lock for cilium-171301: {Name:mk73a4f694f74dc8530831944bb92040f98c814b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 17:17:40.226272  273963 start.go:368] acquired machines lock for "cilium-171301" in 128.513µs
	I1107 17:17:40.226338  273963 start.go:93] Provisioning new machine with config: &{Name:cilium-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 17:17:40.226851  273963 start.go:125] createHost starting for "" (driver="docker")
	I1107 17:17:35.925106  265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1107 17:17:35.931883  265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 17:17:35.931924  265599 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 17:17:36.424461  265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1107 17:17:36.430147  265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1107 17:17:36.437609  265599 api_server.go:140] control plane version: v1.25.3
	I1107 17:17:36.437636  265599 api_server.go:130] duration metric: took 4.709684273s to wait for apiserver health ...
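The 500-then-200 sequence above is the expected shape of this wait: /healthz returns 500 while post-start hooks (here rbac/bootstrap-roles) are still running, then flips to 200 "ok". A minimal sketch of such a probe, assuming the same endpoint and the apiserver's self-signed certificate (so verification is skipped, acceptable only for a local health check):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 20; i++ {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // the "ok" case logged above
				}
			}
			time.Sleep(500 * time.Millisecond) // retry until the post-start hooks finish
		}
	}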
	I1107 17:17:36.437645  265599 cni.go:95] Creating CNI manager for ""
	I1107 17:17:36.437652  265599 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 17:17:36.437659  265599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 17:17:36.447744  265599 system_pods.go:59] 6 kube-system pods found
	I1107 17:17:36.447788  265599 system_pods.go:61] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 17:17:36.447801  265599 system_pods.go:61] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 17:17:36.447812  265599 system_pods.go:61] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 17:17:36.447823  265599 system_pods.go:61] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 17:17:36.447833  265599 system_pods.go:61] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 17:17:36.447851  265599 system_pods.go:61] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
	I1107 17:17:36.447860  265599 system_pods.go:74] duration metric: took 10.195758ms to wait for pod list to return data ...
	I1107 17:17:36.447873  265599 node_conditions.go:102] verifying NodePressure condition ...
	I1107 17:17:36.452085  265599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 17:17:36.452127  265599 node_conditions.go:123] node cpu capacity is 8
	I1107 17:17:36.452142  265599 node_conditions.go:105] duration metric: took 4.263555ms to run NodePressure ...
	I1107 17:17:36.452169  265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:17:36.655569  265599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1107 17:17:36.659806  265599 kubeadm.go:778] kubelet initialised
	I1107 17:17:36.659830  265599 kubeadm.go:779] duration metric: took 4.236781ms waiting for restarted kubelet to initialise ...
	I1107 17:17:36.659837  265599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:17:36.664724  265599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:38.678405  265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
	I1107 17:17:39.764430  254808 pod_ready.go:92] pod "coredns-565d847f94-zscpb" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:39.764470  254808 pod_ready.go:81] duration metric: took 37.51089729s waiting for pod "coredns-565d847f94-zscpb" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.764489  254808 pod_ready.go:78] waiting up to 5m0s for pod "etcd-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.769704  254808 pod_ready.go:92] pod "etcd-auto-171300" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:39.769729  254808 pod_ready.go:81] duration metric: took 5.228844ms waiting for pod "etcd-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.769741  254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.774830  254808 pod_ready.go:92] pod "kube-apiserver-auto-171300" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:39.774850  254808 pod_ready.go:81] duration metric: took 5.101563ms waiting for pod "kube-apiserver-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.774863  254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.779742  254808 pod_ready.go:92] pod "kube-controller-manager-auto-171300" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:39.779767  254808 pod_ready.go:81] duration metric: took 4.895957ms waiting for pod "kube-controller-manager-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.779780  254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-5hjzb" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.787718  254808 pod_ready.go:92] pod "kube-proxy-5hjzb" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:39.787745  254808 pod_ready.go:81] duration metric: took 7.956771ms waiting for pod "kube-proxy-5hjzb" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.787759  254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:40.161780  254808 pod_ready.go:92] pod "kube-scheduler-auto-171300" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:40.161804  254808 pod_ready.go:81] duration metric: took 374.038459ms waiting for pod "kube-scheduler-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:40.161812  254808 pod_ready.go:38] duration metric: took 39.930959656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:17:40.161836  254808 api_server.go:51] waiting for apiserver process to appear ...
	I1107 17:17:40.161880  254808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:17:40.174326  254808 api_server.go:71] duration metric: took 40.098096653s to wait for apiserver process to appear ...
	I1107 17:17:40.174356  254808 api_server.go:87] waiting for apiserver healthz status ...
	I1107 17:17:40.174385  254808 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1107 17:17:40.180459  254808 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1107 17:17:40.181698  254808 api_server.go:140] control plane version: v1.25.3
	I1107 17:17:40.181729  254808 api_server.go:130] duration metric: took 7.366556ms to wait for apiserver health ...
	I1107 17:17:40.181739  254808 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 17:17:40.365251  254808 system_pods.go:59] 7 kube-system pods found
	I1107 17:17:40.365291  254808 system_pods.go:61] "coredns-565d847f94-zscpb" [a8e008dc-4166-4449-8182-2d5998d7e35a] Running
	I1107 17:17:40.365298  254808 system_pods.go:61] "etcd-auto-171300" [b26c6dee-c57a-4455-bf34-57e8d4bdae28] Running
	I1107 17:17:40.365305  254808 system_pods.go:61] "kube-apiserver-auto-171300" [9702725f-76a4-4828-ba51-3bd1bd31c921] Running
	I1107 17:17:40.365313  254808 system_pods.go:61] "kube-controller-manager-auto-171300" [a2722655-640b-4f80-8ecc-0cb3abbc73e1] Running
	I1107 17:17:40.365320  254808 system_pods.go:61] "kube-proxy-5hjzb" [e3111b6a-3730-47f4-b80e-fa872011b18d] Running
	I1107 17:17:40.365326  254808 system_pods.go:61] "kube-scheduler-auto-171300" [49b194d9-1c66-4db1-964c-72958b48a969] Running
	I1107 17:17:40.365341  254808 system_pods.go:61] "storage-provisioner" [af36ca23-ffa5-4472-b090-7e646b93034c] Running
	I1107 17:17:40.365353  254808 system_pods.go:74] duration metric: took 183.607113ms to wait for pod list to return data ...
	I1107 17:17:40.365368  254808 default_sa.go:34] waiting for default service account to be created ...
	I1107 17:17:40.561571  254808 default_sa.go:45] found service account: "default"
	I1107 17:17:40.561596  254808 default_sa.go:55] duration metric: took 196.218934ms for default service account to be created ...
	I1107 17:17:40.561604  254808 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 17:17:40.765129  254808 system_pods.go:86] 7 kube-system pods found
	I1107 17:17:40.765166  254808 system_pods.go:89] "coredns-565d847f94-zscpb" [a8e008dc-4166-4449-8182-2d5998d7e35a] Running
	I1107 17:17:40.765200  254808 system_pods.go:89] "etcd-auto-171300" [b26c6dee-c57a-4455-bf34-57e8d4bdae28] Running
	I1107 17:17:40.765210  254808 system_pods.go:89] "kube-apiserver-auto-171300" [9702725f-76a4-4828-ba51-3bd1bd31c921] Running
	I1107 17:17:40.765218  254808 system_pods.go:89] "kube-controller-manager-auto-171300" [a2722655-640b-4f80-8ecc-0cb3abbc73e1] Running
	I1107 17:17:40.765225  254808 system_pods.go:89] "kube-proxy-5hjzb" [e3111b6a-3730-47f4-b80e-fa872011b18d] Running
	I1107 17:17:40.765231  254808 system_pods.go:89] "kube-scheduler-auto-171300" [49b194d9-1c66-4db1-964c-72958b48a969] Running
	I1107 17:17:40.765237  254808 system_pods.go:89] "storage-provisioner" [af36ca23-ffa5-4472-b090-7e646b93034c] Running
	I1107 17:17:40.765245  254808 system_pods.go:126] duration metric: took 203.635578ms to wait for k8s-apps to be running ...
	I1107 17:17:40.765255  254808 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 17:17:40.765298  254808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:17:40.776269  254808 system_svc.go:56] duration metric: took 11.004445ms WaitForService to wait for kubelet.
	I1107 17:17:40.776304  254808 kubeadm.go:573] duration metric: took 40.700080633s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 17:17:40.776325  254808 node_conditions.go:102] verifying NodePressure condition ...
	I1107 17:17:40.962904  254808 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 17:17:40.962940  254808 node_conditions.go:123] node cpu capacity is 8
	I1107 17:17:40.962955  254808 node_conditions.go:105] duration metric: took 186.624576ms to run NodePressure ...
	I1107 17:17:40.962972  254808 start.go:217] waiting for startup goroutines ...
	I1107 17:17:40.963411  254808 ssh_runner.go:195] Run: rm -f paused
	I1107 17:17:41.016064  254808 start.go:506] kubectl: 1.25.3, cluster: 1.25.3 (minor skew: 0)
	I1107 17:17:41.019135  254808 out.go:177] * Done! kubectl is now configured to use "auto-171300" cluster and "default" namespace by default
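The pod_ready waits in the auto-171300 stream above poll each system-critical pod until its Ready condition is True. A rough client-go sketch of that check; the kubeconfig path is illustrative and the pod name is taken from the log, neither is copied from the minikube source:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-auto-171300", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // poll again, as the pod_ready.go waits above do
		}
	}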
	I1107 17:17:38.938491  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 17:17:38.966502  233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
	I1107 17:17:38.966589  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 17:17:38.992316  233006 logs.go:274] 1 containers: [6fec17665e36]
	I1107 17:17:38.992406  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 17:17:39.018933  233006 logs.go:274] 0 containers: []
	W1107 17:17:39.018962  233006 logs.go:276] No container was found matching "coredns"
	I1107 17:17:39.019012  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 17:17:39.046418  233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
	I1107 17:17:39.046497  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 17:17:39.072173  233006 logs.go:274] 0 containers: []
	W1107 17:17:39.072208  233006 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:17:39.072257  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 17:17:39.098237  233006 logs.go:274] 0 containers: []
	W1107 17:17:39.098266  233006 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:17:39.098309  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 17:17:39.124960  233006 logs.go:274] 0 containers: []
	W1107 17:17:39.124989  233006 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:17:39.125038  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 17:17:39.153502  233006 logs.go:274] 3 containers: [8891a1b14e04 1c2c98a4c31a 371287b3c0c6]
	I1107 17:17:39.153554  233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
	I1107 17:17:39.153570  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
	I1107 17:17:39.193713  233006 logs.go:123] Gathering logs for kube-controller-manager [1c2c98a4c31a] ...
	I1107 17:17:39.193770  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2c98a4c31a"
	I1107 17:17:39.222940  233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
	I1107 17:17:39.222968  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
	I1107 17:17:39.264980  233006 logs.go:123] Gathering logs for Docker ...
	I1107 17:17:39.265019  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 17:17:39.306266  233006 logs.go:123] Gathering logs for kubelet ...
	I1107 17:17:39.306303  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 17:17:39.375563  233006 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:17:39.375608  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:17:39.446970  233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
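A connection-refused from localhost:8443 here is not itself the test failure: this stream (pid 233006) appears to belong to a concurrent profile whose apiserver is still coming back up, so the describe-nodes step simply cannot reach it yet. A hedged sketch of retrying such a command until the endpoint answers (command and timing are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "describe", "nodes").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			fmt.Printf("attempt %d: %v\n", attempt, err) // e.g. connection refused while the apiserver restarts
			time.Sleep(3 * time.Second)
		}
	}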
	I1107 17:17:39.446997  233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
	I1107 17:17:39.447010  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
	I1107 17:17:39.478856  233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
	I1107 17:17:39.478893  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
	I1107 17:17:39.551509  233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
	I1107 17:17:39.551552  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
	I1107 17:17:39.588201  233006 logs.go:123] Gathering logs for container status ...
	I1107 17:17:39.588235  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:17:39.622485  233006 logs.go:123] Gathering logs for dmesg ...
	I1107 17:17:39.622531  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:17:39.711503  233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
	I1107 17:17:39.711531  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
	I1107 17:17:39.746571  233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
	I1107 17:17:39.746605  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
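Each "Gathering logs for ..." pair above follows the same pattern per component: list matching containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail each hit with docker logs --tail 400. A compact sketch of that loop (component list abbreviated):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, comp := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
			// Find container IDs for this component, as logs.go does above.
			ids, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+comp, "--format", "{{.ID}}").Output()
			if err != nil {
				continue
			}
			for _, id := range strings.Fields(string(ids)) {
				out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", comp, id, out)
			}
		}
	}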
	I1107 17:17:42.339399  233006 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1107 17:17:42.339827  233006 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1107 17:17:42.439058  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 17:17:42.465860  233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
	I1107 17:17:42.465945  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 17:17:42.503349  233006 logs.go:274] 1 containers: [6fec17665e36]
	I1107 17:17:42.503419  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 17:17:42.529180  233006 logs.go:274] 0 containers: []
	W1107 17:17:42.529209  233006 logs.go:276] No container was found matching "coredns"
	I1107 17:17:42.529272  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 17:17:42.556348  233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
	I1107 17:17:42.556424  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 17:17:42.585423  233006 logs.go:274] 0 containers: []
	W1107 17:17:42.585457  233006 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:17:42.585514  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 17:17:42.612694  233006 logs.go:274] 0 containers: []
	W1107 17:17:42.612730  233006 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:17:42.612806  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 17:17:42.638513  233006 logs.go:274] 0 containers: []
	W1107 17:17:42.638534  233006 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:17:42.638584  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 17:17:42.666063  233006 logs.go:274] 2 containers: [8891a1b14e04 371287b3c0c6]
	I1107 17:17:42.666121  233006 logs.go:123] Gathering logs for dmesg ...
	I1107 17:17:42.666139  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:17:42.683133  233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
	I1107 17:17:42.683163  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
	I1107 17:17:42.718461  233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
	I1107 17:17:42.718496  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
	I1107 17:17:42.752314  233006 logs.go:123] Gathering logs for Docker ...
	I1107 17:17:42.752340  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 17:17:42.774285  233006 logs.go:123] Gathering logs for container status ...
	I1107 17:17:42.774322  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:17:42.808596  233006 logs.go:123] Gathering logs for kubelet ...
	I1107 17:17:42.808627  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 17:17:42.886659  233006 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:17:42.886698  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:17:42.960618  233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:17:42.960656  233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
	I1107 17:17:42.960670  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
	I1107 17:17:43.002805  233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
	I1107 17:17:43.002858  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
	I1107 17:17:43.082429  233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
	I1107 17:17:43.082467  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
	I1107 17:17:43.115843  233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
	I1107 17:17:43.115911  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
	I1107 17:17:43.190735  233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
	I1107 17:17:43.190775  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
	I1107 17:17:40.229568  273963 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 17:17:40.229875  273963 start.go:159] libmachine.API.Create for "cilium-171301" (driver="docker")
	I1107 17:17:40.229916  273963 client.go:168] LocalClient.Create starting
	I1107 17:17:40.230045  273963 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem
	I1107 17:17:40.230090  273963 main.go:134] libmachine: Decoding PEM data...
	I1107 17:17:40.230115  273963 main.go:134] libmachine: Parsing certificate...
	I1107 17:17:40.230183  273963 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem
	I1107 17:17:40.230204  273963 main.go:134] libmachine: Decoding PEM data...
	I1107 17:17:40.230217  273963 main.go:134] libmachine: Parsing certificate...
	I1107 17:17:40.230581  273963 cli_runner.go:164] Run: docker network inspect cilium-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 17:17:40.255766  273963 cli_runner.go:211] docker network inspect cilium-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 17:17:40.255850  273963 network_create.go:272] running [docker network inspect cilium-171301] to gather additional debugging logs...
	I1107 17:17:40.255875  273963 cli_runner.go:164] Run: docker network inspect cilium-171301
	W1107 17:17:40.279408  273963 cli_runner.go:211] docker network inspect cilium-171301 returned with exit code 1
	I1107 17:17:40.279440  273963 network_create.go:275] error running [docker network inspect cilium-171301]: docker network inspect cilium-171301: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-171301
	I1107 17:17:40.279451  273963 network_create.go:277] output of [docker network inspect cilium-171301]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-171301
	
	** /stderr **
	I1107 17:17:40.279494  273963 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:17:40.309079  273963 network.go:246] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-aa8bc6b4377d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f9:4a:a0:7f}}
	I1107 17:17:40.309777  273963 network.go:246] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-46185e74412a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:46:c3:83:d6}}
	I1107 17:17:40.310466  273963 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0004bc5f8] misses:0}
	I1107 17:17:40.310501  273963 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 17:17:40.310513  273963 network_create.go:115] attempt to create docker network cilium-171301 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1107 17:17:40.310578  273963 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-171301 cilium-171301
	I1107 17:17:40.390589  273963 network_create.go:99] docker network cilium-171301 192.168.67.0/24 created
	I1107 17:17:40.390635  273963 kic.go:106] calculated static IP "192.168.67.2" for the "cilium-171301" container
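The network.go lines above show the free-subnet scan: candidate /24 blocks are tried in order (192.168.49.0, 192.168.58.0, 192.168.67.0), skipping any already bound to an existing bridge, and the first free one is reserved for a minute while the network is created. A toy sketch of that scan; the step of 9 in the third octet is inferred from this log, not confirmed against the minikube source:

	package main

	import "fmt"

	func main() {
		// Third octets already claimed by other profiles, per the log above.
		taken := map[int]bool{49: true, 58: true}
		for octet := 49; octet <= 254; octet += 9 { // 49 -> 58 -> 67, as logged
			if !taken[octet] {
				fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
				return // prints 192.168.67.0/24, matching this run
			}
		}
	}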
	I1107 17:17:40.390704  273963 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 17:17:40.426276  273963 cli_runner.go:164] Run: docker volume create cilium-171301 --label name.minikube.sigs.k8s.io=cilium-171301 --label created_by.minikube.sigs.k8s.io=true
	I1107 17:17:40.452601  273963 oci.go:103] Successfully created a docker volume cilium-171301
	I1107 17:17:40.452735  273963 cli_runner.go:164] Run: docker run --rm --name cilium-171301-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-171301 --entrypoint /usr/bin/test -v cilium-171301:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1107 17:17:41.261517  273963 oci.go:107] Successfully prepared a docker volume cilium-171301
	I1107 17:17:41.261565  273963 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 17:17:41.261584  273963 kic.go:179] Starting extracting preloaded images to volume ...
	I1107 17:17:41.261639  273963 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-171301:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 17:17:44.552998  273963 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-171301:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (3.291298492s)
	I1107 17:17:44.553029  273963 kic.go:188] duration metric: took 3.291442 seconds to extract preloaded images to volume
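The two docker runs above first validate the new volume with a throwaway /usr/bin/test container, then stream the preload tarball into it through tar. A hedged sketch of that second step via os/exec follows; the image digest, volume name, and tar flags are copied from the log, while the tarball path is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const image = "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456"
	// illustrative host path to the preload tarball
	tarball := "/path/to/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4"
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "cilium-171301:/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preloaded images extracted to volume")
}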
	W1107 17:17:44.553206  273963 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 17:17:44.553333  273963 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 17:17:44.659014  273963 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-171301 --name cilium-171301 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-171301 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-171301 --network cilium-171301 --ip 192.168.67.2 --volume cilium-171301:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1107 17:17:40.678711  265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
	I1107 17:17:42.751499  265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
	I1107 17:17:45.178920  265599 pod_ready.go:92] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:45.178953  265599 pod_ready.go:81] duration metric: took 8.514203128s waiting for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:45.178969  265599 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:47.190344  265599 pod_ready.go:92] pod "etcd-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:47.190385  265599 pod_ready.go:81] duration metric: took 2.011408194s waiting for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:47.190401  265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.703190  265599 pod_ready.go:92] pod "kube-apiserver-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:48.703227  265599 pod_ready.go:81] duration metric: took 1.512816405s waiting for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.703241  265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.708302  265599 pod_ready.go:92] pod "kube-controller-manager-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:48.708326  265599 pod_ready.go:81] duration metric: took 5.077395ms waiting for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.708335  265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.713353  265599 pod_ready.go:92] pod "kube-proxy-627q2" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:48.713373  265599 pod_ready.go:81] duration metric: took 5.032187ms waiting for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.713382  265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.718276  265599 pod_ready.go:92] pod "kube-scheduler-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:48.718298  265599 pod_ready.go:81] duration metric: took 4.909784ms waiting for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.718308  265599 pod_ready.go:38] duration metric: took 12.058462568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
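The pod_ready.go lines above poll each system pod until its PodReady condition reports True, logging a duration metric per pod. A minimal client-go sketch of that loop, assuming an already-configured clientset; function names here are illustrative, not minikube's.

package podready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady blocks up to timeout, like the 4m0s waits in the log above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient API errors
		}
		return isReady(pod), nil
	})
	fmt.Printf("duration metric: took %s waiting for pod %q\n", time.Since(start), name)
	return err
}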
	I1107 17:17:48.718326  265599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 17:17:48.725688  265599 ops.go:34] apiserver oom_adj: -16
	I1107 17:17:48.725713  265599 kubeadm.go:631] restartCluster took 23.70983267s
	I1107 17:17:48.725723  265599 kubeadm.go:398] StartCluster complete in 23.739715552s
	I1107 17:17:48.725742  265599 settings.go:142] acquiring lock: {Name:mke91789b0d6e4070893f671805542745cc27d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:48.725827  265599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15310-3679/kubeconfig
	I1107 17:17:48.727240  265599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/kubeconfig: {Name:mk0b702cd34f333a37178f1520735cf3ce85aad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:48.728367  265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 17:17:48.731431  265599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-171530" rescaled to 1
	I1107 17:17:48.731509  265599 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 17:17:48.735381  265599 out.go:177] * Verifying Kubernetes components...
	I1107 17:17:45.728936  233006 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1107 17:17:45.729307  233006 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1107 17:17:45.938905  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 17:17:45.968231  233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
	I1107 17:17:45.968310  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 17:17:45.995241  233006 logs.go:274] 1 containers: [6fec17665e36]
	I1107 17:17:45.995316  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 17:17:46.024313  233006 logs.go:274] 0 containers: []
	W1107 17:17:46.024343  233006 logs.go:276] No container was found matching "coredns"
	I1107 17:17:46.024394  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 17:17:46.054216  233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
	I1107 17:17:46.054293  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 17:17:46.088627  233006 logs.go:274] 0 containers: []
	W1107 17:17:46.088662  233006 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:17:46.088710  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 17:17:46.116330  233006 logs.go:274] 0 containers: []
	W1107 17:17:46.116365  233006 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:17:46.116420  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 17:17:46.150637  233006 logs.go:274] 0 containers: []
	W1107 17:17:46.150668  233006 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:17:46.150771  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 17:17:46.182148  233006 logs.go:274] 2 containers: [8891a1b14e04 371287b3c0c6]
	I1107 17:17:46.182207  233006 logs.go:123] Gathering logs for dmesg ...
	I1107 17:17:46.182221  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:17:46.204275  233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
	I1107 17:17:46.204315  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
	I1107 17:17:46.244475  233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
	I1107 17:17:46.244515  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
	I1107 17:17:46.337500  233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
	I1107 17:17:46.337547  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
	I1107 17:17:46.384737  233006 logs.go:123] Gathering logs for Docker ...
	I1107 17:17:46.384774  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 17:17:46.405735  233006 logs.go:123] Gathering logs for container status ...
	I1107 17:17:46.405772  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:17:46.443740  233006 logs.go:123] Gathering logs for kubelet ...
	I1107 17:17:46.443780  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 17:17:46.515276  233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
	I1107 17:17:46.515311  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
	I1107 17:17:46.550260  233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
	I1107 17:17:46.550314  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
	I1107 17:17:46.632884  233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
	I1107 17:17:46.632921  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
	I1107 17:17:46.667751  233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
	I1107 17:17:46.667787  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
	I1107 17:17:46.701085  233006 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:17:46.701121  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:17:46.780102  233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
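The logs.go passage above repeats one pattern per component: list container IDs matching a k8s_ name filter, then tail each one's logs (or warn when none match, as for coredns and kube-proxy here). A compact local sketch of that pattern, with the docker commands copied from the log and error handling trimmed:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
func containerIDs(name string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids := containerIDs(comp)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", comp)
			continue
		}
		for _, id := range ids {
			fmt.Printf("Gathering logs for %s [%s] ...\n", comp, id)
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("%s", out)
		}
	}
}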
	I1107 17:17:48.731563  265599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 17:17:48.731586  265599 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I1107 17:17:48.731727  265599 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:48.737019  265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:17:48.737075  265599 addons.go:65] Setting default-storageclass=true in profile "pause-171530"
	I1107 17:17:48.737103  265599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-171530"
	I1107 17:17:48.737073  265599 addons.go:65] Setting storage-provisioner=true in profile "pause-171530"
	I1107 17:17:48.737183  265599 addons.go:227] Setting addon storage-provisioner=true in "pause-171530"
	W1107 17:17:48.737191  265599 addons.go:236] addon storage-provisioner should already be in state true
	I1107 17:17:48.737247  265599 host.go:66] Checking if "pause-171530" exists ...
	I1107 17:17:48.737345  265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
	I1107 17:17:48.737690  265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
	I1107 17:17:48.748838  265599 node_ready.go:35] waiting up to 6m0s for node "pause-171530" to be "Ready" ...
	I1107 17:17:48.755501  265599 node_ready.go:49] node "pause-171530" has status "Ready":"True"
	I1107 17:17:48.755530  265599 node_ready.go:38] duration metric: took 6.650143ms waiting for node "pause-171530" to be "Ready" ...
	I1107 17:17:48.755544  265599 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:17:48.774070  265599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:17:45.119361  273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Running}}
	I1107 17:17:45.160545  273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Status}}
	I1107 17:17:45.191402  273963 cli_runner.go:164] Run: docker exec cilium-171301 stat /var/lib/dpkg/alternatives/iptables
	I1107 17:17:45.267825  273963 oci.go:144] the created container "cilium-171301" has a running status.
	I1107 17:17:45.267856  273963 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa...
	I1107 17:17:45.381762  273963 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 17:17:45.520399  273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Status}}
	I1107 17:17:45.581314  273963 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 17:17:45.581340  273963 kic_runner.go:114] Args: [docker exec --privileged cilium-171301 chown docker:docker /home/docker/.ssh/authorized_keys]
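kic.go and kic_runner.go above generate a fresh RSA keypair for the node, copy the public half into /home/docker/.ssh/authorized_keys, and chown it to the docker user. A hedged sketch of the key-generation half using golang.org/x/crypto/ssh; output filenames are illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// generate the keypair, as kic.go does for the machine's id_rsa
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// private key -> PEM on disk (id_rsa)
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	// public key -> authorized_keys wire format (id_rsa.pub)
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}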
	I1107 17:17:45.671973  273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Status}}
	I1107 17:17:45.703596  273963 machine.go:88] provisioning docker machine ...
	I1107 17:17:45.703639  273963 ubuntu.go:169] provisioning hostname "cilium-171301"
	I1107 17:17:45.703689  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:45.732869  273963 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:45.733123  273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I1107 17:17:45.733143  273963 main.go:134] libmachine: About to run SSH command:
	sudo hostname cilium-171301 && echo "cilium-171301" | sudo tee /etc/hostname
	I1107 17:17:45.878648  273963 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-171301
	
	I1107 17:17:45.878766  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:45.906394  273963 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:45.906551  273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I1107 17:17:45.906570  273963 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-171301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-171301/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-171301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 17:17:46.027393  273963 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 17:17:46.027440  273963 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-3679/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-3679/.minikube}
	I1107 17:17:46.027464  273963 ubuntu.go:177] setting up certificates
	I1107 17:17:46.027474  273963 provision.go:83] configureAuth start
	I1107 17:17:46.027538  273963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-171301
	I1107 17:17:46.061281  273963 provision.go:138] copyHostCerts
	I1107 17:17:46.061348  273963 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem, removing ...
	I1107 17:17:46.061366  273963 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem
	I1107 17:17:46.061441  273963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem (1082 bytes)
	I1107 17:17:46.061560  273963 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem, removing ...
	I1107 17:17:46.061575  273963 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem
	I1107 17:17:46.061617  273963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem (1123 bytes)
	I1107 17:17:46.061749  273963 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem, removing ...
	I1107 17:17:46.061764  273963 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem
	I1107 17:17:46.061801  273963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem (1675 bytes)
	I1107 17:17:46.061863  273963 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem org=jenkins.cilium-171301 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-171301]
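provision.go above mints a server certificate signed by the minikube CA with the SANs listed in the log (192.168.67.2, 127.0.0.1, localhost, minikube, cilium-171301). A minimal crypto/x509 sketch of the SAN wiring, under the assumption that the CA cert and key are already loaded; key persistence and serial management are omitted.

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// serverCert returns a DER-encoded server cert for the SANs seen above.
// ca/caKey loading is out of scope; this only shows the template fields.
func serverCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), // illustrative; real code randomizes this
		Subject:      pkix.Name{Organization: []string{"jenkins.cilium-171301"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "cilium-171301"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}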
	I1107 17:17:46.253924  273963 provision.go:172] copyRemoteCerts
	I1107 17:17:46.253999  273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 17:17:46.254047  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:46.296985  273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
	I1107 17:17:46.384442  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 17:17:46.404309  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1107 17:17:46.427506  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 17:17:46.449504  273963 provision.go:86] duration metric: configureAuth took 422.011748ms
	I1107 17:17:46.449540  273963 ubuntu.go:193] setting minikube options for container-runtime
	I1107 17:17:46.449738  273963 config.go:180] Loaded profile config "cilium-171301": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:46.449813  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:46.481398  273963 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:46.481541  273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I1107 17:17:46.481555  273963 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 17:17:46.599328  273963 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 17:17:46.599354  273963 ubuntu.go:71] root file system type: overlay
	I1107 17:17:46.599539  273963 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 17:17:46.599598  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:46.629056  273963 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:46.629241  273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I1107 17:17:46.629343  273963 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 17:17:46.770161  273963 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 17:17:46.770248  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:46.799041  273963 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:46.799188  273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I1107 17:17:46.799207  273963 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 17:17:47.547232  273963 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:17:46.766442749 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1107 17:17:47.547272  273963 machine.go:91] provisioned docker machine in 1.84364984s
	I1107 17:17:47.547283  273963 client.go:171] LocalClient.Create took 7.317360133s
	I1107 17:17:47.547304  273963 start.go:167] duration metric: libmachine.API.Create for "cilium-171301" took 7.317430541s
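The diff-then-swap one-liner above (diff the rendered unit against the installed one; only on a difference mv it into place, daemon-reload, enable, and restart docker) is what makes the unit update idempotent. A rough Go equivalent of that guard, assuming the rendered file already exists at newPath:

package provision

import (
	"bytes"
	"os"
	"os/exec"
)

// applyUnit mirrors the shell guard above: replace the live unit and bounce
// the service only when the freshly rendered unit actually differs.
func applyUnit(newPath, livePath string) error {
	fresh, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	live, _ := os.ReadFile(livePath) // a missing live file simply counts as "differs"
	if bytes.Equal(fresh, live) {
		return nil // identical: leave the running service untouched
	}
	if err := os.Rename(newPath, livePath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if err := exec.Command("sudo", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}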
	I1107 17:17:47.547312  273963 start.go:300] post-start starting for "cilium-171301" (driver="docker")
	I1107 17:17:47.547320  273963 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 17:17:47.547382  273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 17:17:47.547424  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:47.580680  273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
	I1107 17:17:47.670961  273963 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 17:17:47.674334  273963 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 17:17:47.674370  273963 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 17:17:47.674379  273963 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 17:17:47.674385  273963 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 17:17:47.674395  273963 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/addons for local assets ...
	I1107 17:17:47.674457  273963 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/files for local assets ...
	I1107 17:17:47.674531  273963 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem -> 101292.pem in /etc/ssl/certs
	I1107 17:17:47.674630  273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 17:17:47.682576  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /etc/ssl/certs/101292.pem (1708 bytes)
	I1107 17:17:47.702345  273963 start.go:303] post-start completed in 155.016776ms
	I1107 17:17:47.702863  273963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-171301
	I1107 17:17:47.729269  273963 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/config.json ...
	I1107 17:17:47.729653  273963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 17:17:47.729754  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:47.754933  273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
	I1107 17:17:47.839677  273963 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 17:17:47.843908  273963 start.go:128] duration metric: createHost completed in 7.617038008s
	I1107 17:17:47.843931  273963 start.go:83] releasing machines lock for "cilium-171301", held for 7.617622807s
	I1107 17:17:47.844011  273963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-171301
	I1107 17:17:47.870280  273963 ssh_runner.go:195] Run: systemctl --version
	I1107 17:17:47.870346  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:47.870364  273963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 17:17:47.870434  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:47.897797  273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
	I1107 17:17:47.898053  273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
	I1107 17:17:48.013979  273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 17:17:48.022299  273963 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1107 17:17:48.037257  273963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:17:48.110172  273963 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 17:17:48.198655  273963 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 17:17:48.210409  273963 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 17:17:48.210475  273963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 17:17:48.222331  273963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 17:17:48.238231  273963 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 17:17:48.324359  273963 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 17:17:48.401465  273963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:17:48.479636  273963 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 17:17:48.709599  273963 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 17:17:48.829234  273963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:17:48.915216  273963 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 17:17:48.926795  273963 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 17:17:48.926878  273963 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 17:17:48.930979  273963 start.go:472] Will wait 60s for crictl version
	I1107 17:17:48.931044  273963 ssh_runner.go:195] Run: sudo crictl version
	I1107 17:17:48.968172  273963 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 17:17:48.968235  273963 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 17:17:49.004145  273963 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
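start.go above waits up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl and docker versions. A small stat-polling sketch of that wait; the 250ms interval is an assumption, only the path and 60s budget come from the log.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the socket path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for socket %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is up")
}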
	I1107 17:17:48.776053  265599 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 17:17:48.776086  265599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 17:17:48.776141  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:48.780418  265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 17:17:48.783994  265599 addons.go:227] Setting addon default-storageclass=true in "pause-171530"
	W1107 17:17:48.784033  265599 addons.go:236] addon default-storageclass should already be in state true
	I1107 17:17:48.784066  265599 host.go:66] Checking if "pause-171530" exists ...
	I1107 17:17:48.784533  265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
	I1107 17:17:48.791755  265599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.827118  265599 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 17:17:48.827146  265599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 17:17:48.827202  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:48.832614  265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
	I1107 17:17:48.844192  265599 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1107 17:17:48.858350  265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
	I1107 17:17:48.935269  265599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 17:17:48.958923  265599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 17:17:49.187938  265599 pod_ready.go:92] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:49.187970  265599 pod_ready.go:81] duration metric: took 396.174585ms waiting for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.187985  265599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.588753  265599 pod_ready.go:92] pod "etcd-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:49.588785  265599 pod_ready.go:81] duration metric: took 400.791096ms waiting for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.588799  265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.758403  265599 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 17:17:49.040144  273963 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 17:17:49.040219  273963 cli_runner.go:164] Run: docker network inspect cilium-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:17:49.069531  273963 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1107 17:17:49.072992  273963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 17:17:49.083058  273963 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 17:17:49.083116  273963 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 17:17:49.107581  273963 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 17:17:49.107611  273963 docker.go:543] Images already preloaded, skipping extraction
	I1107 17:17:49.107668  273963 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 17:17:49.133204  273963 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 17:17:49.133245  273963 cache_images.go:84] Images are preloaded, skipping loading
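docker.go above lists the node's images and concludes the preload already covers everything v1.25.3 needs, so extraction and loading are skipped. A sketch of that containment check; the expected list is copied verbatim from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.25.3",
		"registry.k8s.io/kube-controller-manager:v1.25.3",
		"registry.k8s.io/kube-scheduler:v1.25.3",
		"registry.k8s.io/kube-proxy:v1.25.3",
		"registry.k8s.io/pause:3.8",
		"registry.k8s.io/etcd:3.5.4-0",
		"registry.k8s.io/coredns/coredns:v1.9.3",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, want := range expected {
		if !have[want] {
			fmt.Println("missing preloaded image:", want)
			return
		}
	}
	fmt.Println("Images are preloaded, skipping loading")
}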
	I1107 17:17:49.133295  273963 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 17:17:49.206522  273963 cni.go:95] Creating CNI manager for "cilium"
	I1107 17:17:49.206553  273963 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 17:17:49.206574  273963 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-171301 NodeName:cilium-171301 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 17:17:49.206774  273963 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "cilium-171301"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 17:17:49.206866  273963 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cilium-171301 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
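kubeadm.go above renders its options struct into the InitConfiguration/ClusterConfiguration YAML and the kubelet unit shown. A toy text/template rendering of just the nodeRegistration block, assuming the real template is far larger; the struct here is an illustrative subset, not minikube's.

package main

import (
	"os"
	"text/template"
)

// opts is a toy subset of the fields kubeadm.go feeds its template
type opts struct {
	NodeName         string
	AdvertiseAddress string
	CRISocket        string
}

const nodeRegistration = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(nodeRegistration))
	// values copied from the kubeadm config above
	_ = t.Execute(os.Stdout, opts{
		NodeName:         "cilium-171301",
		AdvertiseAddress: "192.168.67.2",
		CRISocket:        "/var/run/cri-dockerd.sock",
	})
}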
	I1107 17:17:49.206924  273963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 17:17:49.215024  273963 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 17:17:49.215106  273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 17:17:49.223091  273963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
	I1107 17:17:49.237727  273963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 17:17:49.251298  273963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
	I1107 17:17:49.265109  273963 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1107 17:17:49.268700  273963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 17:17:49.278537  273963 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301 for IP: 192.168.67.2
	I1107 17:17:49.278656  273963 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key
	I1107 17:17:49.278710  273963 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key
	I1107 17:17:49.278784  273963 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.key
	I1107 17:17:49.278798  273963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt with IP's: []
	I1107 17:17:49.377655  273963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt ...
	I1107 17:17:49.377689  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: {Name:mk85045205a0f3cc9db16d3ba4384eb58e4d4170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:49.377932  273963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.key ...
	I1107 17:17:49.377950  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.key: {Name:mk22ddbbc0c35976a622861a2537590ceb2c3529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:49.378071  273963 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e
	I1107 17:17:49.378101  273963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 17:17:49.717401  273963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e ...
	I1107 17:17:49.717449  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e: {Name:mk1d0b418ed1d3c777ce02b789369b0a0920bca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:49.717668  273963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e ...
	I1107 17:17:49.717686  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e: {Name:mkad3745d4acb3a4df279ae7d626aaef591fc7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:49.717800  273963 certs.go:320] copying /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt
	I1107 17:17:49.717875  273963 certs.go:324] copying /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key
	I1107 17:17:49.717938  273963 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key
	I1107 17:17:49.717957  273963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt with IP's: []
	I1107 17:17:49.788111  273963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt ...
	I1107 17:17:49.788144  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt: {Name:mk4ef43b9fbc1a2c60e066e8c2245294f6e4a088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:49.788346  273963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key ...
	I1107 17:17:49.788363  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key: {Name:mk3536bb270258df328f9904013708493e9e5cd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
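Each WriteFile above runs behind a named lock with a 500ms retry delay and 1m0s timeout, so concurrent minikube processes cannot clobber each other's cert files. A loose approximation of that behavior using flock(2) directly rather than minikube's lock package:

//go:build linux

package lockfile

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// writeFileLocked approximates the lock.go pattern above: retry every 500ms
// for up to a minute to take an exclusive flock on <path>.lock, then write.
func writeFileLocked(path string, data []byte, perm os.FileMode) error {
	lock, err := os.OpenFile(path+".lock", os.O_CREATE|os.O_RDWR, 0644)
	if err != nil {
		return err
	}
	defer lock.Close()

	deadline := time.Now().Add(time.Minute)
	for {
		err = syscall.Flock(int(lock.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
		if err == nil {
			break
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring lock for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
	defer syscall.Flock(int(lock.Fd()), syscall.LOCK_UN)
	return os.WriteFile(path, data, perm)
}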
	I1107 17:17:49.788581  273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem (1338 bytes)
	W1107 17:17:49.788630  273963 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129_empty.pem, impossibly tiny 0 bytes
	I1107 17:17:49.788648  273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 17:17:49.788683  273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem (1082 bytes)
	I1107 17:17:49.788717  273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem (1123 bytes)
	I1107 17:17:49.788750  273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem (1675 bytes)
	I1107 17:17:49.788805  273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem (1708 bytes)
	I1107 17:17:49.789402  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 17:17:49.809402  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 17:17:49.828363  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 17:17:49.851556  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 17:17:49.875238  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 17:17:49.895507  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 17:17:49.917493  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 17:17:49.938898  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 17:17:49.958074  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 17:17:49.976967  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem --> /usr/share/ca-certificates/10129.pem (1338 bytes)
	I1107 17:17:49.997249  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /usr/share/ca-certificates/101292.pem (1708 bytes)
	I1107 17:17:50.022620  273963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
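The scp steps above stage every generated certificate at a fixed path under /var/lib/minikube/certs (plus the CA copies under /usr/share/ca-certificates) inside the node. A rough, hypothetical stand-in for that step (shelling out to the scp binary, whereas minikube itself uses an embedded SSH runner) might look like:

// copycerts.go: an illustrative stand-in (not minikube's ssh_runner) for the
// cert-copy step logged above. The remote paths come from the log; the node
// address and key-based root SSH access are assumptions for the sketch.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func scpToNode(local, remote string) error {
	cmd := exec.Command("scp", local, "root@192.168.67.2:"+remote)
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	pairs := map[string]string{
		"apiserver.crt":    "/var/lib/minikube/certs/apiserver.crt",
		"apiserver.key":    "/var/lib/minikube/certs/apiserver.key",
		"proxy-client.crt": "/var/lib/minikube/certs/proxy-client.crt",
	}
	for local, remote := range pairs {
		if err := scpToNode(local, remote); err != nil {
			fmt.Fprintf(os.Stderr, "copy %s: %v\n", local, err)
		}
	}
}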
	I1107 17:17:50.037986  273963 ssh_runner.go:195] Run: openssl version
	I1107 17:17:50.043912  273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10129.pem && ln -fs /usr/share/ca-certificates/10129.pem /etc/ssl/certs/10129.pem"
	I1107 17:17:50.052548  273963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10129.pem
	I1107 17:17:50.056053  273963 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/10129.pem
	I1107 17:17:50.056137  273963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10129.pem
	I1107 17:17:50.061307  273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10129.pem /etc/ssl/certs/51391683.0"
	I1107 17:17:50.069615  273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101292.pem && ln -fs /usr/share/ca-certificates/101292.pem /etc/ssl/certs/101292.pem"
	I1107 17:17:50.079805  273963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101292.pem
	I1107 17:17:50.084296  273963 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/101292.pem
	I1107 17:17:50.084356  273963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101292.pem
	I1107 17:17:50.090328  273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101292.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 17:17:50.099164  273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 17:17:50.110113  273963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:17:50.114343  273963 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:17:50.114408  273963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:17:50.120637  273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
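The openssl runs just above register each CA with the node's trust store: compute the certificate's subject hash, then symlink the PEM into /etc/ssl/certs as <hash>.0 so OpenSSL can resolve it by hash (minikubeCA.pem becomes b5213941.0). A minimal Go sketch of those two steps, assuming root on the node; this is illustrative, not minikube's actual certs.go:

// certlink.go: a minimal sketch of installing a CA under /etc/ssl/certs by
// its OpenSSL subject hash, mirroring the "openssl x509 -hash -noout" and
// "ln -fs" pair in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	// Same hash the log computes; prints e.g. "b5213941" for minikubeCA.pem.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL resolves CAs as <hash>.0, <hash>.1, ...; ".0" suffices here.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // the "-f" in ln -fs: drop any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}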
	I1107 17:17:50.130809  273963 kubeadm.go:396] StartCluster: {Name:cilium-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:17:50.130955  273963 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 17:17:50.158917  273963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 17:17:50.166269  273963 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 17:17:50.174871  273963 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 17:17:50.174936  273963 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:17:50.184105  273963 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 17:17:50.184164  273963 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 17:17:50.239005  273963 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1107 17:17:50.239098  273963 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 17:17:50.279571  273963 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1107 17:17:50.279660  273963 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1107 17:17:50.279716  273963 kubeadm.go:317] OS: Linux
	I1107 17:17:50.279780  273963 kubeadm.go:317] CGROUPS_CPU: enabled
	I1107 17:17:50.279825  273963 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1107 17:17:50.279866  273963 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1107 17:17:50.279907  273963 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1107 17:17:50.279948  273963 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1107 17:17:50.279989  273963 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1107 17:17:50.280029  273963 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1107 17:17:50.280070  273963 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1107 17:17:50.280109  273963 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1107 17:17:50.359738  273963 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 17:17:50.359870  273963 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 17:17:50.359983  273963 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 17:17:50.504499  273963 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 17:17:49.760036  265599 addons.go:488] enableAddons completed in 1.028452371s
	I1107 17:17:49.988064  265599 pod_ready.go:92] pod "kube-apiserver-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:49.988085  265599 pod_ready.go:81] duration metric: took 399.27917ms waiting for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.988096  265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:50.387943  265599 pod_ready.go:92] pod "kube-controller-manager-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:50.387964  265599 pod_ready.go:81] duration metric: took 399.861996ms waiting for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:50.387975  265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:50.787240  265599 pod_ready.go:92] pod "kube-proxy-627q2" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:50.787266  265599 pod_ready.go:81] duration metric: took 399.283504ms waiting for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:50.787279  265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:51.187853  265599 pod_ready.go:92] pod "kube-scheduler-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:51.187885  265599 pod_ready.go:81] duration metric: took 400.597643ms waiting for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:51.187896  265599 pod_ready.go:38] duration metric: took 2.432339677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
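The pod_ready lines from the parallel pause-171530 start (process 265599) poll each system pod until its Ready condition reports True, capped at 6m0s per pod. A rough client-go equivalent of that loop; waitPodReady is a hypothetical helper, not minikube's pod_ready.go:

// waitready.go: a sketch of per-pod Ready polling with client-go, under the
// assumption that a kubeconfig at the default location reaches the cluster.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(400*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "kube-scheduler-pause-171530", 6*time.Minute))
}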
	I1107 17:17:51.187921  265599 api_server.go:51] waiting for apiserver process to appear ...
	I1107 17:17:51.187970  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:17:51.198604  265599 api_server.go:71] duration metric: took 2.467050632s to wait for apiserver process to appear ...
	I1107 17:17:51.198640  265599 api_server.go:87] waiting for apiserver healthz status ...
	I1107 17:17:51.198650  265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1107 17:17:51.203228  265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1107 17:17:51.204215  265599 api_server.go:140] control plane version: v1.25.3
	I1107 17:17:51.204244  265599 api_server.go:130] duration metric: took 5.597242ms to wait for apiserver health ...
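The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once /healthz returns 200 with body "ok". A minimal reproduction, skipping certificate verification for brevity (the real check trusts the cluster CA):

// healthz.go: a minimal sketch of the apiserver healthz probe seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}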
	I1107 17:17:51.204255  265599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 17:17:51.389884  265599 system_pods.go:59] 7 kube-system pods found
	I1107 17:17:51.389918  265599 system_pods.go:61] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running
	I1107 17:17:51.389923  265599 system_pods.go:61] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running
	I1107 17:17:51.389927  265599 system_pods.go:61] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running
	I1107 17:17:51.389932  265599 system_pods.go:61] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running
	I1107 17:17:51.389936  265599 system_pods.go:61] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running
	I1107 17:17:51.389940  265599 system_pods.go:61] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
	I1107 17:17:51.389944  265599 system_pods.go:61] "storage-provisioner" [225d8eea-c00a-46a3-8b89-abb34458db76] Running
	I1107 17:17:51.389949  265599 system_pods.go:74] duration metric: took 185.688763ms to wait for pod list to return data ...
	I1107 17:17:51.389958  265599 default_sa.go:34] waiting for default service account to be created ...
	I1107 17:17:51.587856  265599 default_sa.go:45] found service account: "default"
	I1107 17:17:51.587885  265599 default_sa.go:55] duration metric: took 197.921282ms for default service account to be created ...
	I1107 17:17:51.587896  265599 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 17:17:51.791610  265599 system_pods.go:86] 7 kube-system pods found
	I1107 17:17:51.791656  265599 system_pods.go:89] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running
	I1107 17:17:51.791666  265599 system_pods.go:89] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running
	I1107 17:17:51.791683  265599 system_pods.go:89] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running
	I1107 17:17:51.791692  265599 system_pods.go:89] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running
	I1107 17:17:51.791699  265599 system_pods.go:89] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running
	I1107 17:17:51.791707  265599 system_pods.go:89] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
	I1107 17:17:51.791717  265599 system_pods.go:89] "storage-provisioner" [225d8eea-c00a-46a3-8b89-abb34458db76] Running
	I1107 17:17:51.791725  265599 system_pods.go:126] duration metric: took 203.823982ms to wait for k8s-apps to be running ...
	I1107 17:17:51.791734  265599 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 17:17:51.791785  265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:17:51.802112  265599 system_svc.go:56] duration metric: took 10.369415ms WaitForService to wait for kubelet.
	I1107 17:17:51.802147  265599 kubeadm.go:573] duration metric: took 3.070599627s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 17:17:51.802170  265599 node_conditions.go:102] verifying NodePressure condition ...
	I1107 17:17:51.987329  265599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 17:17:51.987365  265599 node_conditions.go:123] node cpu capacity is 8
	I1107 17:17:51.987379  265599 node_conditions.go:105] duration metric: took 185.202183ms to run NodePressure ...
	I1107 17:17:51.987392  265599 start.go:217] waiting for startup goroutines ...
	I1107 17:17:51.987763  265599 ssh_runner.go:195] Run: rm -f paused
	I1107 17:17:52.043023  265599 start.go:506] kubectl: 1.25.3, cluster: 1.25.3 (minor skew: 0)
	I1107 17:17:52.045707  265599 out.go:177] * Done! kubectl is now configured to use "pause-171530" cluster and "default" namespace by default
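The closing kubectl line compares the client and control-plane minor versions and reports the skew (1.25.3 against 1.25.3, so 0). A small sketch of that comparison, with deliberately simplified version parsing:

// skewcheck.go: a sketch of the "minor skew" comparison logged above; kubectl
// officially supports one minor version of skew against the apiserver.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor component of a "major.minor.patch" version.
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.25.3", "1.25.3"
	skew := minorOf(kubectl) - minorOf(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}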
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-11-07 17:15:39 UTC, end at Mon 2022-11-07 17:17:53 UTC. --
	Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.867503766Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 9b7990a4868df38640a1a4b501d3861a71b30b34429e7e3c19b6f85cd55e5664 708aac62fe16d29b27b7e03823a98eca3e1f022eaaaae07b03b614462c34f61c], retrying...."
	Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.947874677Z" level=info msg="Removing stale sandbox 08bdce8089e979563c1c35fc2b9cb00ca97ae33cb7c45028d6147314b55324da (6d1abd3e30d792833852b3f43c7effc3075f17e2807dee93ee5437621536102e)"
	Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.949913639Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 9b7990a4868df38640a1a4b501d3861a71b30b34429e7e3c19b6f85cd55e5664 f88327d8868c8ad0f7411a8b72ba2baa71bca468214ef9b295ee84ffe8afcc29], retrying...."
	Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.982329624Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.027920216Z" level=info msg="Loading containers: done."
	Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.040588525Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.040673581Z" level=info msg="Daemon has completed initialization"
	Nov 07 17:17:24 pause-171530 systemd[1]: Started Docker Application Container Engine.
	Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.057458789Z" level=info msg="API listen on [::]:2376"
	Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.061113478Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.523817703Z" level=info msg="ignoring event" container=7c093d736ba0305191d4e798ca0d308583b1c7463ad986b23c2d186951b7d0ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.529123877Z" level=info msg="ignoring event" container=42f2c39561b11166e1cca511011d19541e07606bda37d3d78a6b8d6324edba56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.531551274Z" level=info msg="ignoring event" container=c109021f97b0ec6487f090af18a20062a7df3c8845d39ce8fa8a5e3494da80ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.536407618Z" level=info msg="ignoring event" container=bc4811d3f9f168bfaa9567e8b88953c21b5104eba6a02a534a6b32e074d9a512 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.536463887Z" level=info msg="ignoring event" container=c9629a7195e0926d21d4aebeb78f3778a8379562c623cac143cfd8764639c395 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.537822290Z" level=info msg="ignoring event" container=cdc8d9ab8c016ad1726c8ec69dafffa0822704571646314f8f002d64229b9dcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.654013442Z" level=error msg="stream copy error: reading from a closed fifo"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.660623628Z" level=error msg="stream copy error: reading from a closed fifo"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.668658992Z" level=error msg="404d7bd895c853d22c917ec8770367d7a91dafd370c7b8959c3253e584e1eb5d cleanup: failed to delete container from containerd: no such container"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.671711028Z" level=error msg="9dc3075461e2264f083ac8045d0398e1cb1b95857a3a65126bf2c8178945eb02 cleanup: failed to delete container from containerd: no such container"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.683178370Z" level=error msg="d4737d2c0cc12722054c6a67e64adfcb09ac5d35405d5f62738a911f119801f2 cleanup: failed to delete container from containerd: no such container"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.730178512Z" level=error msg="1ca6e9485fa8aaf7657cec34a2aafba49fda2fe8d446b8f44f511ca7746e1c0d cleanup: failed to delete container from containerd: no such container"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.730223095Z" level=error msg="Handler for POST /v1.40/containers/1ca6e9485fa8aaf7657cec34a2aafba49fda2fe8d446b8f44f511ca7746e1c0d/start returned error: can't join IPC of container bc4811d3f9f168bfaa9567e8b88953c21b5104eba6a02a534a6b32e074d9a512: container bc4811d3f9f168bfaa9567e8b88953c21b5104eba6a02a534a6b32e074d9a512 is not running"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.734363189Z" level=error msg="ca313e60699e88a95aade29a7a771b01943787674653d827c9ac778c304b7ee2 cleanup: failed to delete container from containerd: no such container"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.889125639Z" level=error msg="b6069c474d48724ad6405cac869a299021de19f0e83735250a6669e95f84de98 cleanup: failed to delete container from containerd: no such container"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	4e27fc3536146       6e38f40d628db       3 seconds ago       Running             storage-provisioner       0                   869229924f7b0
	2678c07441af4       5185b96f0becf       17 seconds ago      Running             coredns                   2                   c8ae9930fd89e
	d128588b435c4       beaaf00edd38a       18 seconds ago      Running             kube-proxy                3                   308cd3b6261d9
	fa1fae9e3dd4c       6d23ec0e8b87e       22 seconds ago      Running             kube-scheduler            3                   499d52ff7ec2d
	9a2c93b7807eb       0346dbd74bcb9       22 seconds ago      Running             kube-apiserver            3                   ca7019d32208a
	c617e5f72b7e0       6039992312758       22 seconds ago      Running             kube-controller-manager   3                   b2be7ef781078
	240c58d21dba8       a8a176a5d5d69       22 seconds ago      Running             etcd                      3                   af4dddaaaab51
	b6069c474d487       5185b96f0becf       25 seconds ago      Created             coredns                   1                   cdc8d9ab8c016
	9dc3075461e22       0346dbd74bcb9       25 seconds ago      Created             kube-apiserver            2                   c109021f97b0e
	404d7bd895c85       6039992312758       25 seconds ago      Created             kube-controller-manager   2                   7c093d736ba03
	ca313e60699e8       6d23ec0e8b87e       25 seconds ago      Created             kube-scheduler            2                   42f2c39561b11
	1ca6e9485fa8a       a8a176a5d5d69       25 seconds ago      Created             etcd                      2                   bc4811d3f9f16
	d4737d2c0cc12       beaaf00edd38a       25 seconds ago      Created             kube-proxy                2                   c9629a7195e09
	
	* 
	* ==> coredns [2678c07441af] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = f3fde9de6486f59fe260f641c8b45d450960379ea9d73a7fef0c1feac6c746730bd77c72d2092518703e00d94c78d1eec0c6cb3efcd4dc489238241cea4bf436
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> coredns [b6069c474d48] <==
	* 
	* 
	* ==> describe nodes <==
	* Name:               pause-171530
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-171530
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262
	                    minikube.k8s.io/name=pause-171530
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_11_07T17_16_00_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Nov 2022 17:15:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-171530
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Nov 2022 17:17:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Nov 2022 17:17:35 +0000   Mon, 07 Nov 2022 17:15:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Nov 2022 17:17:35 +0000   Mon, 07 Nov 2022 17:15:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Nov 2022 17:17:35 +0000   Mon, 07 Nov 2022 17:15:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Nov 2022 17:17:35 +0000   Mon, 07 Nov 2022 17:17:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-171530
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 996614ec4c814b87b7ec8ebee3d0e8c9
	  System UUID:                584d8003-5974-4bad-ab15-c1a6d30346fa
	  Boot ID:                    08dd20cb-78b6-4f23-8a31-d42df46571b3
	  Kernel Version:             5.15.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.20
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-r6gbf                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-pause-171530                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kube-apiserver-pause-171530             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-pause-171530    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-627q2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-pause-171530             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 98s                  kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m5s (x4 over 2m5s)  kubelet          Node pause-171530 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m5s (x4 over 2m5s)  kubelet          Node pause-171530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m5s (x4 over 2m5s)  kubelet          Node pause-171530 status is now: NodeHasSufficientMemory
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node pause-171530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node pause-171530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node pause-171530 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             113s                 kubelet          Node pause-171530 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  113s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                103s                 kubelet          Node pause-171530 status is now: NodeReady
	  Normal  RegisteredNode           101s                 node-controller  Node pause-171530 event: Registered Node pause-171530 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node pause-171530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node pause-171530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node pause-171530 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                   node-controller  Node pause-171530 event: Registered Node pause-171530 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.004797] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006797] FS-Cache: O-cookie d=00000000b1e64776{9p.inode} n=0000000007b82556
	[  +0.007369] FS-Cache: O-key=[8] '7fa00f0200000000'
	[  +0.004936] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006594] FS-Cache: N-cookie d=00000000b1e64776{9p.inode} n=000000001524e9eb
	[  +0.008729] FS-Cache: N-key=[8] '7fa00f0200000000'
	[  +0.488901] FS-Cache: Duplicate cookie detected
	[  +0.004717] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006779] FS-Cache: O-cookie d=00000000b1e64776{9p.inode} n=000000004d15690e
	[  +0.007381] FS-Cache: O-key=[8] '8ea00f0200000000'
	[  +0.004952] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006607] FS-Cache: N-cookie d=00000000b1e64776{9p.inode} n=00000000470ffc24
	[  +0.008833] FS-Cache: N-key=[8] '8ea00f0200000000'
	[Nov 7 16:54] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 7 17:05] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
	[  +0.000007] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
	[  +1.008285] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
	[  +0.000005] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
	[  +2.011837] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
	[  +0.000035] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
	[Nov 7 17:06] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
	[  +0.000011] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
	[  +8.191212] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
	[  +0.000044] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
	[Nov 7 17:14] process 'docker/tmp/qemu-check072764330/check' started with executable stack
	
	* 
	* ==> etcd [1ca6e9485fa8] <==
	* 
	* 
	* ==> etcd [240c58d21dba] <==
	* {"level":"info","ts":"2022-11-07T17:17:31.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2022-11-07T17:17:31.626Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2022-11-07T17:17:31.626Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-07T17:17:31.626Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 4"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 4"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-171530 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-07T17:17:33.044Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2022-11-07T17:17:33.044Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-11-07T17:17:43.326Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"152.41473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-565d847f94-r6gbf\" ","response":"range_response_count:1 size:5038"}
	{"level":"info","ts":"2022-11-07T17:17:43.326Z","caller":"traceutil/trace.go:171","msg":"trace[1276518897] range","detail":"{range_begin:/registry/pods/kube-system/coredns-565d847f94-r6gbf; range_end:; response_count:1; response_revision:452; }","duration":"152.549915ms","start":"2022-11-07T17:17:43.174Z","end":"2022-11-07T17:17:43.326Z","steps":["trace[1276518897] 'agreement among raft nodes before linearized reading'  (duration: 40.877163ms)","trace[1276518897] 'range keys from in-memory index tree'  (duration: 111.462423ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  17:17:53 up  1:00,  0 users,  load average: 3.46, 3.53, 2.55
	Linux pause-171530 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [9a2c93b7807e] <==
	* I1107 17:17:34.912107       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1107 17:17:34.912439       1 controller.go:83] Starting OpenAPI AggregationController
	I1107 17:17:34.912469       1 available_controller.go:491] Starting AvailableConditionController
	I1107 17:17:34.912477       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I1107 17:17:34.912134       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1107 17:17:34.912451       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I1107 17:17:34.920676       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1107 17:17:34.912711       1 controller.go:85] Starting OpenAPI controller
	I1107 17:17:35.019428       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1107 17:17:35.019719       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1107 17:17:35.020233       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1107 17:17:35.019789       1 cache.go:39] Caches are synced for autoregister controller
	I1107 17:17:35.020532       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1107 17:17:35.020562       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I1107 17:17:35.021059       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1107 17:17:35.038005       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 17:17:35.688505       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1107 17:17:35.915960       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1107 17:17:36.540683       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1107 17:17:36.550675       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1107 17:17:36.580888       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1107 17:17:36.641282       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 17:17:36.648284       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1107 17:17:47.967954       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 17:17:48.027205       1 controller.go:616] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [9dc3075461e2] <==
	* 
	* 
	* ==> kube-controller-manager [404d7bd895c8] <==
	* 
	* 
	* ==> kube-controller-manager [c617e5f72b7e] <==
	* I1107 17:17:48.008590       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I1107 17:17:48.008778       1 event.go:294] "Event occurred" object="pause-171530" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-171530 event: Registered Node pause-171530 in Controller"
	I1107 17:17:48.008734       1 taint_manager.go:209] "Sending events to api server"
	W1107 17:17:48.008888       1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-171530. Assuming now as a timestamp.
	I1107 17:17:48.008920       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1107 17:17:48.019368       1 shared_informer.go:262] Caches are synced for namespace
	I1107 17:17:48.020195       1 shared_informer.go:262] Caches are synced for node
	I1107 17:17:48.020224       1 range_allocator.go:166] Starting range CIDR allocator
	I1107 17:17:48.020230       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1107 17:17:48.020269       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1107 17:17:48.022046       1 shared_informer.go:262] Caches are synced for expand
	I1107 17:17:48.023995       1 shared_informer.go:262] Caches are synced for attach detach
	I1107 17:17:48.028885       1 shared_informer.go:262] Caches are synced for daemon sets
	I1107 17:17:48.040695       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1107 17:17:48.059583       1 shared_informer.go:262] Caches are synced for disruption
	I1107 17:17:48.093865       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1107 17:17:48.094015       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1107 17:17:48.094994       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1107 17:17:48.095030       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1107 17:17:48.152712       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1107 17:17:48.181359       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 17:17:48.224684       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 17:17:48.538831       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 17:17:48.624372       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 17:17:48.624404       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [d128588b435c] <==
	* I1107 17:17:35.802654       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I1107 17:17:35.802795       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I1107 17:17:35.802838       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 17:17:35.823572       1 server_others.go:206] "Using iptables Proxier"
	I1107 17:17:35.823628       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1107 17:17:35.823641       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1107 17:17:35.823661       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1107 17:17:35.823700       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 17:17:35.823862       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 17:17:35.824181       1 server.go:661] "Version info" version="v1.25.3"
	I1107 17:17:35.824201       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:17:35.824705       1 config.go:226] "Starting endpoint slice config controller"
	I1107 17:17:35.824729       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 17:17:35.824729       1 config.go:317] "Starting service config controller"
	I1107 17:17:35.824742       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 17:17:35.824785       1 config.go:444] "Starting node config controller"
	I1107 17:17:35.824797       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 17:17:35.925677       1 shared_informer.go:262] Caches are synced for node config
	I1107 17:17:35.925674       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1107 17:17:35.925738       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [d4737d2c0cc1] <==
	* 
	* 
	* ==> kube-scheduler [ca313e60699e] <==
	* 
	* 
	* ==> kube-scheduler [fa1fae9e3dd4] <==
	* I1107 17:17:32.057264       1 serving.go:348] Generated self-signed cert in-memory
	W1107 17:17:34.927696       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1107 17:17:34.927730       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 17:17:34.927742       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 17:17:34.927752       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 17:17:35.026876       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1107 17:17:35.026910       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:17:35.028404       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1107 17:17:35.032408       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1107 17:17:35.032445       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1107 17:17:35.049814       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 17:17:35.150068       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 17:15:39 UTC, end at Mon 2022-11-07 17:17:53 UTC. --
	Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.471796    5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
	Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.572533    5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
	Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.673100    5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
	Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.773944    5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
	Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.874639    5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.019502    5996 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.020405    5996 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.026112    5996 apiserver.go:52] "Watching apiserver"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.028911    5996 topology_manager.go:205] "Topology Admit Handler"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.029237    5996 topology_manager.go:205] "Topology Admit Handler"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.034807    5996 kubelet_node_status.go:108] "Node was previously registered" node="pause-171530"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.034917    5996 kubelet_node_status.go:73] "Successfully registered node" node="pause-171530"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044165    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/177a31d0-df11-4105-9f5a-c3effe2fc965-xtables-lock\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044237    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxrf\" (UniqueName: \"kubernetes.io/projected/177a31d0-df11-4105-9f5a-c3effe2fc965-kube-api-access-xlxrf\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044387    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpcd\" (UniqueName: \"kubernetes.io/projected/4070c2b0-f450-4494-afc9-30615ea8f3c9-kube-api-access-2kpcd\") pod \"coredns-565d847f94-r6gbf\" (UID: \"4070c2b0-f450-4494-afc9-30615ea8f3c9\") " pod="kube-system/coredns-565d847f94-r6gbf"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044450    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/177a31d0-df11-4105-9f5a-c3effe2fc965-lib-modules\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044482    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4070c2b0-f450-4494-afc9-30615ea8f3c9-config-volume\") pod \"coredns-565d847f94-r6gbf\" (UID: \"4070c2b0-f450-4494-afc9-30615ea8f3c9\") " pod="kube-system/coredns-565d847f94-r6gbf"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044514    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/177a31d0-df11-4105-9f5a-c3effe2fc965-kube-proxy\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044543    5996 reconciler.go:169] "Reconciler: start to sync state"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.630307    5996 scope.go:115] "RemoveContainer" containerID="d4737d2c0cc12722054c6a67e64adfcb09ac5d35405d5f62738a911f119801f2"
	Nov 07 17:17:37 pause-171530 kubelet[5996]: I1107 17:17:37.800520    5996 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Nov 07 17:17:44 pause-171530 kubelet[5996]: I1107 17:17:44.973868    5996 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Nov 07 17:17:49 pause-171530 kubelet[5996]: I1107 17:17:49.752701    5996 topology_manager.go:205] "Topology Admit Handler"
	Nov 07 17:17:49 pause-171530 kubelet[5996]: I1107 17:17:49.934212    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pv8z\" (UniqueName: \"kubernetes.io/projected/225d8eea-c00a-46a3-8b89-abb34458db76-kube-api-access-4pv8z\") pod \"storage-provisioner\" (UID: \"225d8eea-c00a-46a3-8b89-abb34458db76\") " pod="kube-system/storage-provisioner"
	Nov 07 17:17:49 pause-171530 kubelet[5996]: I1107 17:17:49.934319    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/225d8eea-c00a-46a3-8b89-abb34458db76-tmp\") pod \"storage-provisioner\" (UID: \"225d8eea-c00a-46a3-8b89-abb34458db76\") " pod="kube-system/storage-provisioner"
	
	* 
	* ==> storage-provisioner [4e27fc353614] <==
	* I1107 17:17:50.349388       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 17:17:50.361550       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 17:17:50.361616       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 17:17:50.369430       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 17:17:50.369585       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"892faada-f17d-4afd-8626-0abe858770d6", APIVersion:"v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-171530_dd9a8022-cc81-49c9-8ac6-84a620a09cdb became leader
	I1107 17:17:50.369661       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-171530_dd9a8022-cc81-49c9-8ac6-84a620a09cdb!
	I1107 17:17:50.470629       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-171530_dd9a8022-cc81-49c9-8ac6-84a620a09cdb!
	

-- /stdout --
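The storage-provisioner excerpt above is a standard client-go leader election: the pod acquires the kube-system/k8s.io-minikube-hostpath lock, records a LeaderElection event, and only then starts its provisioner controller, so at most one replica ever provisions volumes. Below is a minimal sketch of that pattern using client-go's leaderelection package; it is illustrative only (the provisioner locks an Endpoints object per the event above, whereas this sketch assumes the newer LeaseLock, and the lease timings are assumptions, not minikube's values):

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The identity is what shows up in the "became leader" event,
	// e.g. pause-171530_dd9a8022-... in the log above.
	id, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{ // assumed lock kind; see lead-in
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // assumed; how long a lease is valid
		RenewDeadline: 10 * time.Second, // assumed; give up leading if renewal stalls
		RetryPeriod:   2 * time.Second,  // assumed; acquire/renew interval
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; stopping controller")
			},
		},
	})
}

RunOrDie blocks and keeps renewing the lease; the "Starting/Started provisioner controller" pair in the log corresponds to OnStartedLeading firing once the lease was acquired.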
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-171530 -n pause-171530

=== CONT  TestPause/serial/SecondStartNoReconfiguration
helpers_test.go:261: (dbg) Run:  kubectl --context pause-171530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-171530 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-171530 describe pod : exit status 1 (59.960205ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context pause-171530 describe pod : exit status 1
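The non-zero exit above comes from the post-mortem feeding an empty pod list straight into "kubectl describe pod": the jsonpath query found no non-running pods, and kubectl rejects an empty resource name. A hypothetical guard (not the actual helpers_test.go code) would skip the describe step when the query returns nothing:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// describeNonRunning mirrors the post-mortem step above but skips
// kubectl entirely when the jsonpath output is empty, avoiding the
// "error: resource name may not be empty" failure seen here.
// Hypothetical helper, not the actual helpers_test.go code.
func describeNonRunning(kubecontext, jsonpathOutput string) error {
	names := strings.Fields(jsonpathOutput) // jsonpath items are space-separated
	if len(names) == 0 {
		fmt.Println("no non-running pods to describe")
		return nil
	}
	args := append([]string{"--context", kubecontext, "describe", "pod"}, names...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// The failing run above effectively called this with "".
	_ = describeNonRunning("pause-171530", "")
}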
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-171530
helpers_test.go:235: (dbg) docker inspect pause-171530:

-- stdout --
	[
	    {
	        "Id": "e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550",
	        "Created": "2022-11-07T17:15:38.935447727Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 241803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-07T17:15:39.387509554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/hostname",
	        "HostsPath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/hosts",
	        "LogPath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550-json.log",
	        "Name": "/pause-171530",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-171530:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-171530",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886-init/diff:/var/lib/docker/overlay2/2fd1fc00a589bf61b81b15f5596b1c421509b0ed94a0073de8df35851e0104fd/diff:/var/lib/docker/overlay2/ca94f1e5c7c58ab040213044ce029a51c1ea19ec2ae58d30e36b7c461dac5b75/diff:/var/lib/docker/overlay2/e42a9a60bb0ccca9f6ebc3bec24f638bafba48d604bd99af2d43cee1225c9466/diff:/var/lib/docker/overlay2/3474eef000daf16045ddcd082155e02d3adc432e026d93a79f6650da6b7bbe2c/diff:/var/lib/docker/overlay2/2c37502622a619527bab9f0e94b3c9e8ea823ff6ffdc84760dfeca0a7a1d2ba9/diff:/var/lib/docker/overlay2/c89ceddb787dc6015274fbee4e47c019bcb7637c523d5d053aafccc75f2d8c5b/diff:/var/lib/docker/overlay2/d13aa31ebe50e77225149ff2f5361d34b4b4dcbeb3b0bc0a15e35f3d4a8b7756/diff:/var/lib/docker/overlay2/c95f6f4ff58fc27002c40206891dabcbf4ed1b39c8f3584432f15b72a15920c1/diff:/var/lib/docker/overlay2/609367ca657fad1a480fd0d0075ab9d34c5556928b3f753bf75b7937a8b74ee8/diff:/var/lib/docker/overlay2/02a742
81aea9f2e787ac6f6c4ac9f7d01ae11e33439e4787dff010ca49918d6b/diff:/var/lib/docker/overlay2/97be1349403116decda81fc5f089a2db445d4c5a72b26e4fa1d2d69bc8f5b867/diff:/var/lib/docker/overlay2/0a0a5163f70151b385895e742fd238ec8e8e4f76def9c619677619db2a6d5b08/diff:/var/lib/docker/overlay2/5659ee0023498bf40cbbec8f9a2f0fddfc95419655c96d6605a451a2c46c6036/diff:/var/lib/docker/overlay2/490c47e44446d2723d18ba6ae67ce415128dbc5fd055c8b0c3af734b0a072691/diff:/var/lib/docker/overlay2/303dd4de2e78ffebe2a8b0327ff89f434f0d94efec1239397b26f584669c6688/diff:/var/lib/docker/overlay2/57cd5e60d0e6efc4eba5b1d3312be411722b2dbe779b38d7e29451cb53536ed6/diff:/var/lib/docker/overlay2/ebe05a325862fb9343e31e938f8b0cbebb9eac74b601c1cbd7c51d82932d20b4/diff:/var/lib/docker/overlay2/8536312e6228bdf272e430339824f16762dc9bb32d3fbcd5a2704ed1cbd37e64/diff:/var/lib/docker/overlay2/2598be8b2bb739fc75e87aee71f5af665456fffb16f599676335c74f15ae6391/diff:/var/lib/docker/overlay2/4d2d35e9d340ea3932b4095e279f70853bcd0793bb323921891c0c769627f2c5/diff:/var/lib/d
ocker/overlay2/4d826174051f4f89d8c7f9e2a1c0deeedf4fe1375b7e4805b1507830dfcb85eb/diff:/var/lib/docker/overlay2/04619ad2580acc4047033104b728374c0bcab41b326af981fd92107ded6f8715/diff:/var/lib/docker/overlay2/653c7b7d9b3ff747507ce6d4c8750195142e3c1e5dd8776d1f5ad68da192b0c3/diff:/var/lib/docker/overlay2/7feba1b41892a093a69f3006a5955540f607a8c16986fd594da627470dc20b50/diff:/var/lib/docker/overlay2/edfa060eb3735b8c7368bfa84da65c47f0381d016fcb1f23338cbe984ffb4309/diff:/var/lib/docker/overlay2/7bc7096889faa87a4f3542932b25941d0cb3ebdca2eb7a8323c0b437c946ca84/diff:/var/lib/docker/overlay2/6d9c19e156f90bc4ce093d160661251be6f95a51a9e0712f2a79c6a08cd996cd/diff:/var/lib/docker/overlay2/f5ba9cd7997e8cdfc6fb27c76c069767b07cc8201e7e0ef7c1a3ffa443525fb1/diff:/var/lib/docker/overlay2/43277eab35f847188e2fbacd196549314d6463948690b6eb7218cfe6ecc19b17/diff:/var/lib/docker/overlay2/ef090d552b4022f86d7bdf79bbc298e347a3e535c804f65b2d33683e0864901d/diff:/var/lib/docker/overlay2/8ef9f5644e2d99ddd144a8c44988dff320901634fa10fdd2ceb63b44464
942d2/diff:/var/lib/docker/overlay2/8db604496435b1f4a13ceca647b7f365eccc2122c46c001b46d3343020dce882/diff:/var/lib/docker/overlay2/aa63ff25f14d23e22d30a5f6ffdca4dc610d3a56fda7fcf8128955229e8179ac/diff:/var/lib/docker/overlay2/d8e836f399115dec3f57c3bdae8cfe9459ca00fb4db1619f7c32a54c17f2696a/diff:/var/lib/docker/overlay2/e8706f9f543307c51f76840c008a49519273628b367c558c81472382319ee067/diff:/var/lib/docker/overlay2/410562df42124ab024d1aed6c452424839223794de2fac149e33e3a2aaad7db5/diff:/var/lib/docker/overlay2/24ba0b84d34cf83f31c6e6420465d970cd940052bc918b875c8320dfbeccb3fc/diff:/var/lib/docker/overlay2/cfd31a3b8ba33133312104bac0d05c9334975dd18cb3dfff6ba901668d8935cb/diff:/var/lib/docker/overlay2/2bfc0a7a2746e54d77a9a1838e077ca17b8bd024966ed7fc7f4cfceffc1e41c9/diff:/var/lib/docker/overlay2/67ae264c7fe2b9c7f659d1bbdccdc178c34230e3b6aa815b7f3ff24d50f1ca5a/diff:/var/lib/docker/overlay2/2f921d0a0caaca67918401f3f9b193c0e89b931f174e447a79ba82b2a5743c6e/diff:/var/lib/docker/overlay2/8f6f97c7885b0f2745adf21261ead041f0b7ce
88d0ab325cfafd1cf3b9aa07f3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886/merged",
	                "UpperDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886/diff",
	                "WorkDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-171530",
	                "Source": "/var/lib/docker/volumes/pause-171530/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-171530",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-171530",
	                "name.minikube.sigs.k8s.io": "pause-171530",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9adb1a46308a44769722d4564542b00b60699767153f3cfdcf9adf8a13796ed",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49369"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49368"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49365"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49367"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49366"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a9adb1a46308",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-171530": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e3da15937387",
	                        "pause-171530"
	                    ],
	                    "NetworkID": "39ab6118a516dd29e38bb2d528840c29808f0aaff829c163fb133591392f975d",
	                    "EndpointID": "f05b8ecc16b4a46e2d24102363dbe97c03cc31d021c5d068a263b87ac53329f9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
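The dump above is the full container document; when a post-mortem only needs a few facts (is the container running, which host port fronts the API server, what IP the cluster network assigned), docker inspect's -f templates reduce each to one line. A minimal sketch, assuming the container name from this report; the expected values are the ones visible in the JSON above (running, 192.168.85.2, 49366). The very next helper line applies the same idea with minikube's own --format flag.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectField runs "docker inspect -f <tmpl> <container>" and returns
// the trimmed output; the templates below are standard docker syntax.
func inspectField(container, tmpl string) (string, error) {
	out, err := exec.Command("docker", "inspect", "-f", tmpl, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "pause-171530" // container name taken from this report
	status, _ := inspectField(name, "{{.State.Status}}")
	ip, _ := inspectField(name, "{{(index .NetworkSettings.Networks \""+name+"\").IPAddress}}")
	port, _ := inspectField(name, `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`)
	fmt.Printf("status=%s ip=%s apiserver=127.0.0.1:%s\n", status, ip, port)
}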
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-171530 -n pause-171530
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-171530 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-171530 logs -n 25: (1.167744437s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | cert-options-171318 ssh               | cert-options-171318       | jenkins | v1.28.0 | 07 Nov 22 17:13 UTC | 07 Nov 22 17:13 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-171318 -- sudo        | cert-options-171318       | jenkins | v1.28.0 | 07 Nov 22 17:13 UTC | 07 Nov 22 17:13 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-171318                | cert-options-171318       | jenkins | v1.28.0 | 07 Nov 22 17:13 UTC | 07 Nov 22 17:13 UTC |
	| ssh     | docker-flags-171335 ssh               | docker-flags-171335       | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-171335 ssh               | docker-flags-171335       | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-171335                | docker-flags-171335       | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
	| start   | -p kubernetes-upgrade-171418          | kubernetes-upgrade-171418 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-171351             | missing-upgrade-171351    | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:15 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-171343             | stopped-upgrade-171343    | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:15 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-171418          | kubernetes-upgrade-171418 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:15 UTC |
	| delete  | -p stopped-upgrade-171343             | stopped-upgrade-171343    | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:15 UTC |
	| start   | -p kubernetes-upgrade-171418          | kubernetes-upgrade-171418 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-171351             | missing-upgrade-171351    | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:15 UTC |
	| start   | -p pause-171530 --memory=2048         | pause-171530              | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:17 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p cert-expiration-171219             | cert-expiration-171219    | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:16 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p running-upgrade-171507             | running-upgrade-171507    | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-171507             | running-upgrade-171507    | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:16 UTC |
	| start   | -p auto-171300 --memory=2048          | auto-171300               | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:17 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m         |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-171219             | cert-expiration-171219    | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:16 UTC |
	| start   | -p kindnet-171300                     | kindnet-171300            | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m         |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker         |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p pause-171530                       | pause-171530              | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171300 pgrep -a            | kindnet-171300            | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-171300                     | kindnet-171300            | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
	| start   | -p cilium-171301 --memory=2048        | cilium-171301             | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m         |                           |         |         |                     |                     |
	|         | --cni=cilium --driver=docker          |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | -p auto-171300 pgrep -a               | auto-171300               | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 17:17:39
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 17:17:39.909782  273963 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:17:39.909910  273963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:17:39.909920  273963 out.go:309] Setting ErrFile to fd 2...
	I1107 17:17:39.909925  273963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:17:39.910036  273963 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
	I1107 17:17:39.910611  273963 out.go:303] Setting JSON to false
	I1107 17:17:39.912756  273963 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3611,"bootTime":1667837849,"procs":1171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 17:17:39.912825  273963 start.go:126] virtualization: kvm guest
	I1107 17:17:39.916343  273963 out.go:177] * [cilium-171301] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 17:17:39.918167  273963 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:17:39.918122  273963 notify.go:220] Checking for updates...
	I1107 17:17:39.919930  273963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:17:39.921709  273963 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	I1107 17:17:39.923329  273963 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	I1107 17:17:39.924851  273963 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 17:17:39.927024  273963 config.go:180] Loaded profile config "auto-171300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:39.927142  273963 config.go:180] Loaded profile config "kubernetes-upgrade-171418": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:39.927235  273963 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:39.927287  273963 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:17:39.959963  273963 docker.go:137] docker version: linux-20.10.21
	I1107 17:17:39.960043  273963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:17:40.066046  273963 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-07 17:17:39.981648038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:17:40.066199  273963 docker.go:254] overlay module found
	I1107 17:17:40.069246  273963 out.go:177] * Using the docker driver based on user configuration
	I1107 17:17:40.070821  273963 start.go:282] selected driver: docker
	I1107 17:17:40.070848  273963 start.go:808] validating driver "docker" against <nil>
	I1107 17:17:40.070871  273963 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:17:40.072076  273963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:17:40.184024  273963 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-07 17:17:40.095572549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:17:40.184162  273963 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 17:17:40.184327  273963 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 17:17:40.186905  273963 out.go:177] * Using Docker driver with root privileges
	I1107 17:17:40.188888  273963 cni.go:95] Creating CNI manager for "cilium"
	I1107 17:17:40.188919  273963 start_flags.go:312] Found "Cilium" CNI - setting NetworkPlugin=cni
	I1107 17:17:40.188929  273963 start_flags.go:317] config:
	{Name:cilium-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:17:40.191042  273963 out.go:177] * Starting control plane node cilium-171301 in cluster cilium-171301
	I1107 17:17:40.192756  273963 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 17:17:40.194622  273963 out.go:177] * Pulling base image ...
	I1107 17:17:40.196366  273963 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 17:17:40.196424  273963 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 17:17:40.196439  273963 cache.go:57] Caching tarball of preloaded images
	I1107 17:17:40.196478  273963 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 17:17:40.196755  273963 preload.go:174] Found /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 17:17:40.196770  273963 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 17:17:40.196994  273963 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/config.json ...
	I1107 17:17:40.197037  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/config.json: {Name:mke8d5318de654621f86e157b3b792411142e89b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:40.226030  273963 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 17:17:40.226064  273963 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 17:17:40.226085  273963 cache.go:208] Successfully downloaded all kic artifacts
	I1107 17:17:40.226119  273963 start.go:364] acquiring machines lock for cilium-171301: {Name:mk73a4f694f74dc8530831944bb92040f98c814b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 17:17:40.226272  273963 start.go:368] acquired machines lock for "cilium-171301" in 128.513µs
	I1107 17:17:40.226338  273963 start.go:93] Provisioning new machine with config: &{Name:cilium-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 17:17:40.226851  273963 start.go:125] createHost starting for "" (driver="docker")
	I1107 17:17:35.925106  265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1107 17:17:35.931883  265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 17:17:35.931924  265599 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 17:17:36.424461  265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1107 17:17:36.430147  265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1107 17:17:36.437609  265599 api_server.go:140] control plane version: v1.25.3
	I1107 17:17:36.437636  265599 api_server.go:130] duration metric: took 4.709684273s to wait for apiserver health ...
	I1107 17:17:36.437645  265599 cni.go:95] Creating CNI manager for ""
	I1107 17:17:36.437652  265599 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 17:17:36.437659  265599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 17:17:36.447744  265599 system_pods.go:59] 6 kube-system pods found
	I1107 17:17:36.447788  265599 system_pods.go:61] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 17:17:36.447801  265599 system_pods.go:61] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 17:17:36.447812  265599 system_pods.go:61] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 17:17:36.447823  265599 system_pods.go:61] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 17:17:36.447833  265599 system_pods.go:61] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 17:17:36.447851  265599 system_pods.go:61] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
	I1107 17:17:36.447860  265599 system_pods.go:74] duration metric: took 10.195758ms to wait for pod list to return data ...
	I1107 17:17:36.447873  265599 node_conditions.go:102] verifying NodePressure condition ...
	I1107 17:17:36.452085  265599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 17:17:36.452127  265599 node_conditions.go:123] node cpu capacity is 8
	I1107 17:17:36.452142  265599 node_conditions.go:105] duration metric: took 4.263555ms to run NodePressure ...
	I1107 17:17:36.452169  265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 17:17:36.655569  265599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1107 17:17:36.659806  265599 kubeadm.go:778] kubelet initialised
	I1107 17:17:36.659830  265599 kubeadm.go:779] duration metric: took 4.236781ms waiting for restarted kubelet to initialise ...
	I1107 17:17:36.659837  265599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:17:36.664724  265599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:38.678405  265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
	I1107 17:17:39.764430  254808 pod_ready.go:92] pod "coredns-565d847f94-zscpb" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:39.764470  254808 pod_ready.go:81] duration metric: took 37.51089729s waiting for pod "coredns-565d847f94-zscpb" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.764489  254808 pod_ready.go:78] waiting up to 5m0s for pod "etcd-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.769704  254808 pod_ready.go:92] pod "etcd-auto-171300" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:39.769729  254808 pod_ready.go:81] duration metric: took 5.228844ms waiting for pod "etcd-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.769741  254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.774830  254808 pod_ready.go:92] pod "kube-apiserver-auto-171300" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:39.774850  254808 pod_ready.go:81] duration metric: took 5.101563ms waiting for pod "kube-apiserver-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.774863  254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.779742  254808 pod_ready.go:92] pod "kube-controller-manager-auto-171300" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:39.779767  254808 pod_ready.go:81] duration metric: took 4.895957ms waiting for pod "kube-controller-manager-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.779780  254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-5hjzb" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.787718  254808 pod_ready.go:92] pod "kube-proxy-5hjzb" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:39.787745  254808 pod_ready.go:81] duration metric: took 7.956771ms waiting for pod "kube-proxy-5hjzb" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:39.787759  254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:40.161780  254808 pod_ready.go:92] pod "kube-scheduler-auto-171300" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:40.161804  254808 pod_ready.go:81] duration metric: took 374.038459ms waiting for pod "kube-scheduler-auto-171300" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:40.161812  254808 pod_ready.go:38] duration metric: took 39.930959656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:17:40.161836  254808 api_server.go:51] waiting for apiserver process to appear ...
	I1107 17:17:40.161880  254808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:17:40.174326  254808 api_server.go:71] duration metric: took 40.098096653s to wait for apiserver process to appear ...
	I1107 17:17:40.174356  254808 api_server.go:87] waiting for apiserver healthz status ...
	I1107 17:17:40.174385  254808 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1107 17:17:40.180459  254808 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1107 17:17:40.181698  254808 api_server.go:140] control plane version: v1.25.3
	I1107 17:17:40.181729  254808 api_server.go:130] duration metric: took 7.366556ms to wait for apiserver health ...
	I1107 17:17:40.181739  254808 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 17:17:40.365251  254808 system_pods.go:59] 7 kube-system pods found
	I1107 17:17:40.365291  254808 system_pods.go:61] "coredns-565d847f94-zscpb" [a8e008dc-4166-4449-8182-2d5998d7e35a] Running
	I1107 17:17:40.365298  254808 system_pods.go:61] "etcd-auto-171300" [b26c6dee-c57a-4455-bf34-57e8d4bdae28] Running
	I1107 17:17:40.365305  254808 system_pods.go:61] "kube-apiserver-auto-171300" [9702725f-76a4-4828-ba51-3bd1bd31c921] Running
	I1107 17:17:40.365313  254808 system_pods.go:61] "kube-controller-manager-auto-171300" [a2722655-640b-4f80-8ecc-0cb3abbc73e1] Running
	I1107 17:17:40.365320  254808 system_pods.go:61] "kube-proxy-5hjzb" [e3111b6a-3730-47f4-b80e-fa872011b18d] Running
	I1107 17:17:40.365326  254808 system_pods.go:61] "kube-scheduler-auto-171300" [49b194d9-1c66-4db1-964c-72958b48a969] Running
	I1107 17:17:40.365341  254808 system_pods.go:61] "storage-provisioner" [af36ca23-ffa5-4472-b090-7e646b93034c] Running
	I1107 17:17:40.365353  254808 system_pods.go:74] duration metric: took 183.607113ms to wait for pod list to return data ...
	I1107 17:17:40.365368  254808 default_sa.go:34] waiting for default service account to be created ...
	I1107 17:17:40.561571  254808 default_sa.go:45] found service account: "default"
	I1107 17:17:40.561596  254808 default_sa.go:55] duration metric: took 196.218934ms for default service account to be created ...
	I1107 17:17:40.561604  254808 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 17:17:40.765129  254808 system_pods.go:86] 7 kube-system pods found
	I1107 17:17:40.765166  254808 system_pods.go:89] "coredns-565d847f94-zscpb" [a8e008dc-4166-4449-8182-2d5998d7e35a] Running
	I1107 17:17:40.765200  254808 system_pods.go:89] "etcd-auto-171300" [b26c6dee-c57a-4455-bf34-57e8d4bdae28] Running
	I1107 17:17:40.765210  254808 system_pods.go:89] "kube-apiserver-auto-171300" [9702725f-76a4-4828-ba51-3bd1bd31c921] Running
	I1107 17:17:40.765218  254808 system_pods.go:89] "kube-controller-manager-auto-171300" [a2722655-640b-4f80-8ecc-0cb3abbc73e1] Running
	I1107 17:17:40.765225  254808 system_pods.go:89] "kube-proxy-5hjzb" [e3111b6a-3730-47f4-b80e-fa872011b18d] Running
	I1107 17:17:40.765231  254808 system_pods.go:89] "kube-scheduler-auto-171300" [49b194d9-1c66-4db1-964c-72958b48a969] Running
	I1107 17:17:40.765237  254808 system_pods.go:89] "storage-provisioner" [af36ca23-ffa5-4472-b090-7e646b93034c] Running
	I1107 17:17:40.765245  254808 system_pods.go:126] duration metric: took 203.635578ms to wait for k8s-apps to be running ...
	I1107 17:17:40.765255  254808 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 17:17:40.765298  254808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:17:40.776269  254808 system_svc.go:56] duration metric: took 11.004445ms WaitForService to wait for kubelet.
	I1107 17:17:40.776304  254808 kubeadm.go:573] duration metric: took 40.700080633s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 17:17:40.776325  254808 node_conditions.go:102] verifying NodePressure condition ...
	I1107 17:17:40.962904  254808 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 17:17:40.962940  254808 node_conditions.go:123] node cpu capacity is 8
	I1107 17:17:40.962955  254808 node_conditions.go:105] duration metric: took 186.624576ms to run NodePressure ...
	I1107 17:17:40.962972  254808 start.go:217] waiting for startup goroutines ...
	I1107 17:17:40.963411  254808 ssh_runner.go:195] Run: rm -f paused
	I1107 17:17:41.016064  254808 start.go:506] kubectl: 1.25.3, cluster: 1.25.3 (minor skew: 0)
	I1107 17:17:41.019135  254808 out.go:177] * Done! kubectl is now configured to use "auto-171300" cluster and "default" namespace by default
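
The auto-171300 sequence above (pod_ready.go) polls each system-critical pod until its Ready condition reports True, with a per-pod budget of 5m0s. A minimal sketch of that polling pattern using client-go follows; the function name waitPodReady and the 500ms interval are assumptions for illustration, not minikube's actual implementation.

// Hedged sketch of the pod_ready.go pattern: poll one pod until Ready=True
// or the timeout expires. waitPodReady is a hypothetical name.
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil // matches the log's `has status "Ready":"True"` lines
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // interim polls surface as "Ready":"False" entries
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

Called once per pod in the label list, this reproduces the paired "waiting up to 5m0s for pod ..." and duration-metric lines seen above.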
	I1107 17:17:38.938491  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 17:17:38.966502  233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
	I1107 17:17:38.966589  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 17:17:38.992316  233006 logs.go:274] 1 containers: [6fec17665e36]
	I1107 17:17:38.992406  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 17:17:39.018933  233006 logs.go:274] 0 containers: []
	W1107 17:17:39.018962  233006 logs.go:276] No container was found matching "coredns"
	I1107 17:17:39.019012  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 17:17:39.046418  233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
	I1107 17:17:39.046497  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 17:17:39.072173  233006 logs.go:274] 0 containers: []
	W1107 17:17:39.072208  233006 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:17:39.072257  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 17:17:39.098237  233006 logs.go:274] 0 containers: []
	W1107 17:17:39.098266  233006 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:17:39.098309  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 17:17:39.124960  233006 logs.go:274] 0 containers: []
	W1107 17:17:39.124989  233006 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:17:39.125038  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 17:17:39.153502  233006 logs.go:274] 3 containers: [8891a1b14e04 1c2c98a4c31a 371287b3c0c6]
	I1107 17:17:39.153554  233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
	I1107 17:17:39.153570  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
	I1107 17:17:39.193713  233006 logs.go:123] Gathering logs for kube-controller-manager [1c2c98a4c31a] ...
	I1107 17:17:39.193770  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2c98a4c31a"
	I1107 17:17:39.222940  233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
	I1107 17:17:39.222968  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
	I1107 17:17:39.264980  233006 logs.go:123] Gathering logs for Docker ...
	I1107 17:17:39.265019  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 17:17:39.306266  233006 logs.go:123] Gathering logs for kubelet ...
	I1107 17:17:39.306303  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 17:17:39.375563  233006 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:17:39.375608  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:17:39.446970  233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:17:39.446997  233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
	I1107 17:17:39.447010  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
	I1107 17:17:39.478856  233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
	I1107 17:17:39.478893  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
	I1107 17:17:39.551509  233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
	I1107 17:17:39.551552  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
	I1107 17:17:39.588201  233006 logs.go:123] Gathering logs for container status ...
	I1107 17:17:39.588235  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:17:39.622485  233006 logs.go:123] Gathering logs for dmesg ...
	I1107 17:17:39.622531  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:17:39.711503  233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
	I1107 17:17:39.711531  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
	I1107 17:17:39.746571  233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
	I1107 17:17:39.746605  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
	I1107 17:17:42.339399  233006 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1107 17:17:42.339827  233006 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1107 17:17:42.439058  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 17:17:42.465860  233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
	I1107 17:17:42.465945  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 17:17:42.503349  233006 logs.go:274] 1 containers: [6fec17665e36]
	I1107 17:17:42.503419  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 17:17:42.529180  233006 logs.go:274] 0 containers: []
	W1107 17:17:42.529209  233006 logs.go:276] No container was found matching "coredns"
	I1107 17:17:42.529272  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 17:17:42.556348  233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
	I1107 17:17:42.556424  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 17:17:42.585423  233006 logs.go:274] 0 containers: []
	W1107 17:17:42.585457  233006 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:17:42.585514  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 17:17:42.612694  233006 logs.go:274] 0 containers: []
	W1107 17:17:42.612730  233006 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:17:42.612806  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 17:17:42.638513  233006 logs.go:274] 0 containers: []
	W1107 17:17:42.638534  233006 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:17:42.638584  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 17:17:42.666063  233006 logs.go:274] 2 containers: [8891a1b14e04 371287b3c0c6]
	I1107 17:17:42.666121  233006 logs.go:123] Gathering logs for dmesg ...
	I1107 17:17:42.666139  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:17:42.683133  233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
	I1107 17:17:42.683163  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
	I1107 17:17:42.718461  233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
	I1107 17:17:42.718496  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
	I1107 17:17:42.752314  233006 logs.go:123] Gathering logs for Docker ...
	I1107 17:17:42.752340  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 17:17:42.774285  233006 logs.go:123] Gathering logs for container status ...
	I1107 17:17:42.774322  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:17:42.808596  233006 logs.go:123] Gathering logs for kubelet ...
	I1107 17:17:42.808627  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 17:17:42.886659  233006 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:17:42.886698  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:17:42.960618  233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:17:42.960656  233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
	I1107 17:17:42.960670  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
	I1107 17:17:43.002805  233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
	I1107 17:17:43.002858  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
	I1107 17:17:43.082429  233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
	I1107 17:17:43.082467  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
	I1107 17:17:43.115843  233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
	I1107 17:17:43.115911  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
	I1107 17:17:43.190735  233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
	I1107 17:17:43.190775  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
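
The 233006 process above is the diagnostic loop in logs.go: for each control-plane component it lists matching containers via a k8s_<component> name filter, then tails the last 400 lines of each. A minimal sketch under those assumptions (gatherComponentLogs is an illustrative name, not minikube's API):

// Hedged sketch of the logs.go gathering loop: list container IDs for one
// component, then capture `docker logs --tail 400` for each.
package logsketch

import (
	"fmt"
	"os/exec"
	"strings"
)

func gatherComponentLogs(component string) (map[string]string, error) {
	// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	logs := map[string]string{}
	for _, id := range strings.Fields(string(out)) {
		b, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("docker logs %s: %w", id, err)
		}
		logs[id] = string(b)
	}
	return logs, nil
}

Components with zero matches are what produce the `No container was found matching "coredns"` style warnings in the log.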
	I1107 17:17:40.229568  273963 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 17:17:40.229875  273963 start.go:159] libmachine.API.Create for "cilium-171301" (driver="docker")
	I1107 17:17:40.229916  273963 client.go:168] LocalClient.Create starting
	I1107 17:17:40.230045  273963 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem
	I1107 17:17:40.230090  273963 main.go:134] libmachine: Decoding PEM data...
	I1107 17:17:40.230115  273963 main.go:134] libmachine: Parsing certificate...
	I1107 17:17:40.230183  273963 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem
	I1107 17:17:40.230204  273963 main.go:134] libmachine: Decoding PEM data...
	I1107 17:17:40.230217  273963 main.go:134] libmachine: Parsing certificate...
	I1107 17:17:40.230581  273963 cli_runner.go:164] Run: docker network inspect cilium-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 17:17:40.255766  273963 cli_runner.go:211] docker network inspect cilium-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 17:17:40.255850  273963 network_create.go:272] running [docker network inspect cilium-171301] to gather additional debugging logs...
	I1107 17:17:40.255875  273963 cli_runner.go:164] Run: docker network inspect cilium-171301
	W1107 17:17:40.279408  273963 cli_runner.go:211] docker network inspect cilium-171301 returned with exit code 1
	I1107 17:17:40.279440  273963 network_create.go:275] error running [docker network inspect cilium-171301]: docker network inspect cilium-171301: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-171301
	I1107 17:17:40.279451  273963 network_create.go:277] output of [docker network inspect cilium-171301]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-171301
	
	** /stderr **
	I1107 17:17:40.279494  273963 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:17:40.309079  273963 network.go:246] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-aa8bc6b4377d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f9:4a:a0:7f}}
	I1107 17:17:40.309777  273963 network.go:246] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-46185e74412a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:46:c3:83:d6}}
	I1107 17:17:40.310466  273963 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0004bc5f8] misses:0}
	I1107 17:17:40.310501  273963 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 17:17:40.310513  273963 network_create.go:115] attempt to create docker network cilium-171301 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1107 17:17:40.310578  273963 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-171301 cilium-171301
	I1107 17:17:40.390589  273963 network_create.go:99] docker network cilium-171301 192.168.67.0/24 created
	I1107 17:17:40.390635  273963 kic.go:106] calculated static IP "192.168.67.2" for the "cilium-171301" container
	I1107 17:17:40.390704  273963 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 17:17:40.426276  273963 cli_runner.go:164] Run: docker volume create cilium-171301 --label name.minikube.sigs.k8s.io=cilium-171301 --label created_by.minikube.sigs.k8s.io=true
	I1107 17:17:40.452601  273963 oci.go:103] Successfully created a docker volume cilium-171301
	I1107 17:17:40.452735  273963 cli_runner.go:164] Run: docker run --rm --name cilium-171301-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-171301 --entrypoint /usr/bin/test -v cilium-171301:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1107 17:17:41.261517  273963 oci.go:107] Successfully prepared a docker volume cilium-171301
	I1107 17:17:41.261565  273963 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 17:17:41.261584  273963 kic.go:179] Starting extracting preloaded images to volume ...
	I1107 17:17:41.261639  273963 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-171301:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 17:17:44.552998  273963 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-171301:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (3.291298492s)
	I1107 17:17:44.553029  273963 kic.go:188] duration metric: took 3.291442 seconds to extract preloaded images to volume
	W1107 17:17:44.553206  273963 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 17:17:44.553333  273963 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 17:17:44.659014  273963 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-171301 --name cilium-171301 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-171301 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-171301 --network cilium-171301 --ip 192.168.67.2 --volume cilium-171301:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
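
Before the cilium-171301 container could be created, network_create.go had to pick a subnet: the log shows 192.168.49.0/24 and 192.168.58.0/24 skipped as taken and 192.168.67.0/24 reserved, i.e. candidates advance in steps of 9 in the third octet. A simplified sketch of that scan (freePrivateSubnet and subnetTaken are illustrative names; the real code also reserves the winner for 1m0s to avoid races between parallel profiles, visible in the sync.Map dump above):

// Hedged sketch: walk 192.168.x.0/24 candidates in steps of 9 and return the
// first block no local interface (e.g. a docker br-* bridge) already claims.
package netsketch

import (
	"fmt"
	"net"
)

func freePrivateSubnet() (string, error) {
	for third := 49; third <= 247; third += 9 { // 49, 58, 67, ... as in the log
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, block, _ := net.ParseCIDR(cidr)
		if !subnetTaken(block) {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free /24 found")
}

func subnetTaken(block *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative on error
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && block.Contains(ipnet.IP) {
			return true
		}
	}
	return false
}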
	I1107 17:17:40.678711  265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
	I1107 17:17:42.751499  265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
	I1107 17:17:45.178920  265599 pod_ready.go:92] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:45.178953  265599 pod_ready.go:81] duration metric: took 8.514203128s waiting for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:45.178969  265599 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:47.190344  265599 pod_ready.go:92] pod "etcd-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:47.190385  265599 pod_ready.go:81] duration metric: took 2.011408194s waiting for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:47.190401  265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.703190  265599 pod_ready.go:92] pod "kube-apiserver-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:48.703227  265599 pod_ready.go:81] duration metric: took 1.512816405s waiting for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.703241  265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.708302  265599 pod_ready.go:92] pod "kube-controller-manager-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:48.708326  265599 pod_ready.go:81] duration metric: took 5.077395ms waiting for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.708335  265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.713353  265599 pod_ready.go:92] pod "kube-proxy-627q2" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:48.713373  265599 pod_ready.go:81] duration metric: took 5.032187ms waiting for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.713382  265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.718276  265599 pod_ready.go:92] pod "kube-scheduler-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:48.718298  265599 pod_ready.go:81] duration metric: took 4.909784ms waiting for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.718308  265599 pod_ready.go:38] duration metric: took 12.058462568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:17:48.718326  265599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 17:17:48.725688  265599 ops.go:34] apiserver oom_adj: -16
	I1107 17:17:48.725713  265599 kubeadm.go:631] restartCluster took 23.70983267s
	I1107 17:17:48.725723  265599 kubeadm.go:398] StartCluster complete in 23.739715552s
	I1107 17:17:48.725742  265599 settings.go:142] acquiring lock: {Name:mke91789b0d6e4070893f671805542745cc27d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:48.725827  265599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15310-3679/kubeconfig
	I1107 17:17:48.727240  265599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/kubeconfig: {Name:mk0b702cd34f333a37178f1520735cf3ce85aad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:48.728367  265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 17:17:48.731431  265599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-171530" rescaled to 1
	I1107 17:17:48.731509  265599 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 17:17:48.735381  265599 out.go:177] * Verifying Kubernetes components...
	I1107 17:17:45.728936  233006 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1107 17:17:45.729307  233006 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1107 17:17:45.938905  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 17:17:45.968231  233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
	I1107 17:17:45.968310  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 17:17:45.995241  233006 logs.go:274] 1 containers: [6fec17665e36]
	I1107 17:17:45.995316  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 17:17:46.024313  233006 logs.go:274] 0 containers: []
	W1107 17:17:46.024343  233006 logs.go:276] No container was found matching "coredns"
	I1107 17:17:46.024394  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 17:17:46.054216  233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
	I1107 17:17:46.054293  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 17:17:46.088627  233006 logs.go:274] 0 containers: []
	W1107 17:17:46.088662  233006 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:17:46.088710  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 17:17:46.116330  233006 logs.go:274] 0 containers: []
	W1107 17:17:46.116365  233006 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:17:46.116420  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 17:17:46.150637  233006 logs.go:274] 0 containers: []
	W1107 17:17:46.150668  233006 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:17:46.150771  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 17:17:46.182148  233006 logs.go:274] 2 containers: [8891a1b14e04 371287b3c0c6]
	I1107 17:17:46.182207  233006 logs.go:123] Gathering logs for dmesg ...
	I1107 17:17:46.182221  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:17:46.204275  233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
	I1107 17:17:46.204315  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
	I1107 17:17:46.244475  233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
	I1107 17:17:46.244515  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
	I1107 17:17:46.337500  233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
	I1107 17:17:46.337547  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
	I1107 17:17:46.384737  233006 logs.go:123] Gathering logs for Docker ...
	I1107 17:17:46.384774  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 17:17:46.405735  233006 logs.go:123] Gathering logs for container status ...
	I1107 17:17:46.405772  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:17:46.443740  233006 logs.go:123] Gathering logs for kubelet ...
	I1107 17:17:46.443780  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 17:17:46.515276  233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
	I1107 17:17:46.515311  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
	I1107 17:17:46.550260  233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
	I1107 17:17:46.550314  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
	I1107 17:17:46.632884  233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
	I1107 17:17:46.632921  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
	I1107 17:17:46.667751  233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
	I1107 17:17:46.667787  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
	I1107 17:17:46.701085  233006 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:17:46.701121  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:17:46.780102  233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
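
Interleaved with the other profiles, process 233006 keeps probing the apiserver endpoint: each "Checking apiserver healthz" that fails with connection refused is logged as "stopped", and the loop falls back to gathering logs before retrying. A minimal sketch of the probe itself, assuming a self-signed endpoint (hence the disabled certificate verification) and the standard 200/"ok" response of /healthz; apiserverHealthy is an illustrative name:

// Hedged sketch of the api_server.go health probe: GET /healthz over TLS
// without verification and require a 200 status with body "ok".
package healthsketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(hostPort string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + hostPort + "/healthz")
	if err != nil {
		// e.g. "dial tcp ...: connect: connection refused" while the apiserver restarts
		return fmt.Errorf("stopped: %w", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}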
	I1107 17:17:48.731563  265599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 17:17:48.731586  265599 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I1107 17:17:48.731727  265599 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:48.737019  265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:17:48.737075  265599 addons.go:65] Setting default-storageclass=true in profile "pause-171530"
	I1107 17:17:48.737103  265599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-171530"
	I1107 17:17:48.737073  265599 addons.go:65] Setting storage-provisioner=true in profile "pause-171530"
	I1107 17:17:48.737183  265599 addons.go:227] Setting addon storage-provisioner=true in "pause-171530"
	W1107 17:17:48.737191  265599 addons.go:236] addon storage-provisioner should already be in state true
	I1107 17:17:48.737247  265599 host.go:66] Checking if "pause-171530" exists ...
	I1107 17:17:48.737345  265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
	I1107 17:17:48.737690  265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
	I1107 17:17:48.748838  265599 node_ready.go:35] waiting up to 6m0s for node "pause-171530" to be "Ready" ...
	I1107 17:17:48.755501  265599 node_ready.go:49] node "pause-171530" has status "Ready":"True"
	I1107 17:17:48.755530  265599 node_ready.go:38] duration metric: took 6.650143ms waiting for node "pause-171530" to be "Ready" ...
	I1107 17:17:48.755544  265599 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:17:48.774070  265599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:17:45.119361  273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Running}}
	I1107 17:17:45.160545  273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Status}}
	I1107 17:17:45.191402  273963 cli_runner.go:164] Run: docker exec cilium-171301 stat /var/lib/dpkg/alternatives/iptables
	I1107 17:17:45.267825  273963 oci.go:144] the created container "cilium-171301" has a running status.
	I1107 17:17:45.267856  273963 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa...
	I1107 17:17:45.381762  273963 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 17:17:45.520399  273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Status}}
	I1107 17:17:45.581314  273963 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 17:17:45.581340  273963 kic_runner.go:114] Args: [docker exec --privileged cilium-171301 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 17:17:45.671973  273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Status}}
	I1107 17:17:45.703596  273963 machine.go:88] provisioning docker machine ...
	I1107 17:17:45.703639  273963 ubuntu.go:169] provisioning hostname "cilium-171301"
	I1107 17:17:45.703689  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:45.732869  273963 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:45.733123  273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I1107 17:17:45.733143  273963 main.go:134] libmachine: About to run SSH command:
	sudo hostname cilium-171301 && echo "cilium-171301" | sudo tee /etc/hostname
	I1107 17:17:45.878648  273963 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-171301
	
	I1107 17:17:45.878766  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:45.906394  273963 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:45.906551  273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I1107 17:17:45.906570  273963 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-171301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-171301/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-171301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 17:17:46.027393  273963 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 17:17:46.027440  273963 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-3679/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-3679/.minikube}
	I1107 17:17:46.027464  273963 ubuntu.go:177] setting up certificates
	I1107 17:17:46.027474  273963 provision.go:83] configureAuth start
	I1107 17:17:46.027538  273963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-171301
	I1107 17:17:46.061281  273963 provision.go:138] copyHostCerts
	I1107 17:17:46.061348  273963 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem, removing ...
	I1107 17:17:46.061366  273963 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem
	I1107 17:17:46.061441  273963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem (1082 bytes)
	I1107 17:17:46.061560  273963 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem, removing ...
	I1107 17:17:46.061575  273963 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem
	I1107 17:17:46.061617  273963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem (1123 bytes)
	I1107 17:17:46.061749  273963 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem, removing ...
	I1107 17:17:46.061764  273963 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem
	I1107 17:17:46.061801  273963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem (1675 bytes)
	I1107 17:17:46.061863  273963 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem org=jenkins.cilium-171301 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-171301]
	I1107 17:17:46.253924  273963 provision.go:172] copyRemoteCerts
	I1107 17:17:46.253999  273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 17:17:46.254047  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:46.296985  273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
	I1107 17:17:46.384442  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 17:17:46.404309  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1107 17:17:46.427506  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 17:17:46.449504  273963 provision.go:86] duration metric: configureAuth took 422.011748ms
	I1107 17:17:46.449540  273963 ubuntu.go:193] setting minikube options for container-runtime
	I1107 17:17:46.449738  273963 config.go:180] Loaded profile config "cilium-171301": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:46.449813  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:46.481398  273963 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:46.481541  273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I1107 17:17:46.481555  273963 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 17:17:46.599328  273963 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 17:17:46.599354  273963 ubuntu.go:71] root file system type: overlay
	I1107 17:17:46.599539  273963 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 17:17:46.599598  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:46.629056  273963 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:46.629241  273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I1107 17:17:46.629343  273963 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 17:17:46.770161  273963 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 17:17:46.770248  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:46.799041  273963 main.go:134] libmachine: Using SSH client type: native
	I1107 17:17:46.799188  273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49384 <nil> <nil>}
	I1107 17:17:46.799207  273963 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 17:17:47.547232  273963 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:17:46.766442749 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1107 17:17:47.547272  273963 machine.go:91] provisioned docker machine in 1.84364984s
	I1107 17:17:47.547283  273963 client.go:171] LocalClient.Create took 7.317360133s
	I1107 17:17:47.547304  273963 start.go:167] duration metric: libmachine.API.Create for "cilium-171301" took 7.317430541s
	I1107 17:17:47.547312  273963 start.go:300] post-start starting for "cilium-171301" (driver="docker")
	I1107 17:17:47.547320  273963 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 17:17:47.547382  273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 17:17:47.547424  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:47.580680  273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
	I1107 17:17:47.670961  273963 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 17:17:47.674334  273963 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 17:17:47.674370  273963 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 17:17:47.674379  273963 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 17:17:47.674385  273963 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 17:17:47.674395  273963 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/addons for local assets ...
	I1107 17:17:47.674457  273963 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/files for local assets ...
	I1107 17:17:47.674531  273963 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem -> 101292.pem in /etc/ssl/certs
	I1107 17:17:47.674630  273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 17:17:47.682576  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /etc/ssl/certs/101292.pem (1708 bytes)
	I1107 17:17:47.702345  273963 start.go:303] post-start completed in 155.016776ms
	I1107 17:17:47.702863  273963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-171301
	I1107 17:17:47.729269  273963 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/config.json ...
	I1107 17:17:47.729653  273963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 17:17:47.729754  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:47.754933  273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
	I1107 17:17:47.839677  273963 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 17:17:47.843908  273963 start.go:128] duration metric: createHost completed in 7.617038008s
	I1107 17:17:47.843931  273963 start.go:83] releasing machines lock for "cilium-171301", held for 7.617622807s
	I1107 17:17:47.844011  273963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-171301
	I1107 17:17:47.870280  273963 ssh_runner.go:195] Run: systemctl --version
	I1107 17:17:47.870346  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:47.870364  273963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 17:17:47.870434  273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
	I1107 17:17:47.897797  273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
	I1107 17:17:47.898053  273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
	I1107 17:17:48.013979  273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 17:17:48.022299  273963 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1107 17:17:48.037257  273963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:17:48.110172  273963 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 17:17:48.198655  273963 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 17:17:48.210409  273963 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 17:17:48.210475  273963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 17:17:48.222331  273963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 17:17:48.238231  273963 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 17:17:48.324359  273963 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 17:17:48.401465  273963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:17:48.479636  273963 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 17:17:48.709599  273963 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 17:17:48.829234  273963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:17:48.915216  273963 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 17:17:48.926795  273963 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 17:17:48.926878  273963 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 17:17:48.930979  273963 start.go:472] Will wait 60s for crictl version
	I1107 17:17:48.931044  273963 ssh_runner.go:195] Run: sudo crictl version
	I1107 17:17:48.968172  273963 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 17:17:48.968235  273963 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 17:17:49.004145  273963 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 17:17:48.776053  265599 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 17:17:48.776086  265599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 17:17:48.776141  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:48.780418  265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 17:17:48.783994  265599 addons.go:227] Setting addon default-storageclass=true in "pause-171530"
	W1107 17:17:48.784033  265599 addons.go:236] addon default-storageclass should already be in state true
	I1107 17:17:48.784066  265599 host.go:66] Checking if "pause-171530" exists ...
	I1107 17:17:48.784533  265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
	I1107 17:17:48.791755  265599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:48.827118  265599 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 17:17:48.827146  265599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 17:17:48.827202  265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
	I1107 17:17:48.832614  265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
	I1107 17:17:48.844192  265599 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1107 17:17:48.858350  265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
	I1107 17:17:48.935269  265599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 17:17:48.958923  265599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 17:17:49.187938  265599 pod_ready.go:92] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:49.187970  265599 pod_ready.go:81] duration metric: took 396.174585ms waiting for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.187985  265599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.588753  265599 pod_ready.go:92] pod "etcd-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:49.588785  265599 pod_ready.go:81] duration metric: took 400.791096ms waiting for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.588799  265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.758403  265599 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 17:17:49.040144  273963 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 17:17:49.040219  273963 cli_runner.go:164] Run: docker network inspect cilium-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:17:49.069531  273963 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1107 17:17:49.072992  273963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 17:17:49.083058  273963 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 17:17:49.083116  273963 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 17:17:49.107581  273963 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 17:17:49.107611  273963 docker.go:543] Images already preloaded, skipping extraction
	I1107 17:17:49.107668  273963 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 17:17:49.133204  273963 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 17:17:49.133245  273963 cache_images.go:84] Images are preloaded, skipping loading
	I1107 17:17:49.133295  273963 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 17:17:49.206522  273963 cni.go:95] Creating CNI manager for "cilium"
	I1107 17:17:49.206553  273963 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 17:17:49.206574  273963 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-171301 NodeName:cilium-171301 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 17:17:49.206774  273963 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "cilium-171301"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
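	This rendered config is what the scp step below writes to /var/tmp/minikube/kubeadm.yaml.new and what the "sudo cp ... kubeadm.yaml" at 17:17:50 promotes before kubeadm init consumes it. A sketch for pulling it back out of a live node (the minikube ssh invocation is an assumption, not something the test runs):

	# Print the kubeadm config that minikube generated inside the node.
	minikube ssh -p cilium-171301 -- sudo cat /var/tmp/minikube/kubeadm.yaml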
	
	I1107 17:17:49.206866  273963 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cilium-171301 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
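	The [Unit]/[Service] fragment above becomes the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, installed next to /lib/systemd/system/kubelet.service by the two scp steps below. A sketch for viewing the merged unit on a live node (assumed, not part of the run):

	# systemd prints the base kubelet unit plus all drop-ins in one view.
	minikube ssh -p cilium-171301 -- systemctl cat kubelet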
	I1107 17:17:49.206924  273963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 17:17:49.215024  273963 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 17:17:49.215106  273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 17:17:49.223091  273963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
	I1107 17:17:49.237727  273963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 17:17:49.251298  273963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
	I1107 17:17:49.265109  273963 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1107 17:17:49.268700  273963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 17:17:49.278537  273963 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301 for IP: 192.168.67.2
	I1107 17:17:49.278656  273963 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key
	I1107 17:17:49.278710  273963 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key
	I1107 17:17:49.278784  273963 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.key
	I1107 17:17:49.278798  273963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt with IP's: []
	I1107 17:17:49.377655  273963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt ...
	I1107 17:17:49.377689  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: {Name:mk85045205a0f3cc9db16d3ba4384eb58e4d4170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:49.377932  273963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.key ...
	I1107 17:17:49.377950  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.key: {Name:mk22ddbbc0c35976a622861a2537590ceb2c3529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:49.378071  273963 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e
	I1107 17:17:49.378101  273963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 17:17:49.717401  273963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e ...
	I1107 17:17:49.717449  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e: {Name:mk1d0b418ed1d3c777ce02b789369b0a0920bca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:49.717668  273963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e ...
	I1107 17:17:49.717686  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e: {Name:mkad3745d4acb3a4df279ae7d626aaef591fc7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:49.717800  273963 certs.go:320] copying /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt
	I1107 17:17:49.717875  273963 certs.go:324] copying /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key
	I1107 17:17:49.717938  273963 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key
	I1107 17:17:49.717957  273963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt with IP's: []
	I1107 17:17:49.788111  273963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt ...
	I1107 17:17:49.788144  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt: {Name:mk4ef43b9fbc1a2c60e066e8c2245294f6e4a088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:49.788346  273963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key ...
	I1107 17:17:49.788363  273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key: {Name:mk3536bb270258df328f9904013708493e9e5cd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:49.788581  273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem (1338 bytes)
	W1107 17:17:49.788630  273963 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129_empty.pem, impossibly tiny 0 bytes
	I1107 17:17:49.788648  273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 17:17:49.788683  273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem (1082 bytes)
	I1107 17:17:49.788717  273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem (1123 bytes)
	I1107 17:17:49.788750  273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem (1675 bytes)
	I1107 17:17:49.788805  273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem (1708 bytes)
	I1107 17:17:49.789402  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 17:17:49.809402  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 17:17:49.828363  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 17:17:49.851556  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 17:17:49.875238  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 17:17:49.895507  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 17:17:49.917493  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 17:17:49.938898  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 17:17:49.958074  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 17:17:49.976967  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem --> /usr/share/ca-certificates/10129.pem (1338 bytes)
	I1107 17:17:49.997249  273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /usr/share/ca-certificates/101292.pem (1708 bytes)
	I1107 17:17:50.022620  273963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 17:17:50.037986  273963 ssh_runner.go:195] Run: openssl version
	I1107 17:17:50.043912  273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10129.pem && ln -fs /usr/share/ca-certificates/10129.pem /etc/ssl/certs/10129.pem"
	I1107 17:17:50.052548  273963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10129.pem
	I1107 17:17:50.056053  273963 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/10129.pem
	I1107 17:17:50.056137  273963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10129.pem
	I1107 17:17:50.061307  273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10129.pem /etc/ssl/certs/51391683.0"
	I1107 17:17:50.069615  273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101292.pem && ln -fs /usr/share/ca-certificates/101292.pem /etc/ssl/certs/101292.pem"
	I1107 17:17:50.079805  273963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101292.pem
	I1107 17:17:50.084296  273963 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/101292.pem
	I1107 17:17:50.084356  273963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101292.pem
	I1107 17:17:50.090328  273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101292.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 17:17:50.099164  273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 17:17:50.110113  273963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:17:50.114343  273963 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:17:50.114408  273963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:17:50.120637  273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
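	The link names 51391683.0, 3ec20f2e.0 and b5213941.0 above are OpenSSL subject hashes: the library resolves CAs in /etc/ssl/certs by <hash>.0, so each symlink must be named after the output of "openssl x509 -hash" for its certificate. A bash sketch of the same step for the minikube CA (paths from the log; by-hand execution is assumed):

	# Derive the subject hash, then create the lookup symlink OpenSSL expects.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"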
	I1107 17:17:50.130809  273963 kubeadm.go:396] StartCluster: {Name:cilium-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:17:50.130955  273963 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 17:17:50.158917  273963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 17:17:50.166269  273963 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 17:17:50.174871  273963 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 17:17:50.174936  273963 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:17:50.184105  273963 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 17:17:50.184164  273963 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 17:17:50.239005  273963 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1107 17:17:50.239098  273963 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 17:17:50.279571  273963 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1107 17:17:50.279660  273963 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1107 17:17:50.279716  273963 kubeadm.go:317] OS: Linux
	I1107 17:17:50.279780  273963 kubeadm.go:317] CGROUPS_CPU: enabled
	I1107 17:17:50.279825  273963 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1107 17:17:50.279866  273963 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1107 17:17:50.279907  273963 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1107 17:17:50.279948  273963 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1107 17:17:50.279989  273963 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1107 17:17:50.280029  273963 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1107 17:17:50.280070  273963 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1107 17:17:50.280109  273963 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1107 17:17:50.359738  273963 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 17:17:50.359870  273963 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 17:17:50.359983  273963 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 17:17:50.504499  273963 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 17:17:49.760036  265599 addons.go:488] enableAddons completed in 1.028452371s
	I1107 17:17:49.988064  265599 pod_ready.go:92] pod "kube-apiserver-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:49.988085  265599 pod_ready.go:81] duration metric: took 399.27917ms waiting for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:49.988096  265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:50.387943  265599 pod_ready.go:92] pod "kube-controller-manager-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:50.387964  265599 pod_ready.go:81] duration metric: took 399.861996ms waiting for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:50.387975  265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:50.787240  265599 pod_ready.go:92] pod "kube-proxy-627q2" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:50.787266  265599 pod_ready.go:81] duration metric: took 399.283504ms waiting for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:50.787279  265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:51.187853  265599 pod_ready.go:92] pod "kube-scheduler-pause-171530" in "kube-system" namespace has status "Ready":"True"
	I1107 17:17:51.187885  265599 pod_ready.go:81] duration metric: took 400.597643ms waiting for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
	I1107 17:17:51.187896  265599 pod_ready.go:38] duration metric: took 2.432339677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:17:51.187921  265599 api_server.go:51] waiting for apiserver process to appear ...
	I1107 17:17:51.187970  265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:17:51.198604  265599 api_server.go:71] duration metric: took 2.467050632s to wait for apiserver process to appear ...
	I1107 17:17:51.198640  265599 api_server.go:87] waiting for apiserver healthz status ...
	I1107 17:17:51.198650  265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1107 17:17:51.203228  265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1107 17:17:51.204215  265599 api_server.go:140] control plane version: v1.25.3
	I1107 17:17:51.204244  265599 api_server.go:130] duration metric: took 5.597242ms to wait for apiserver health ...
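	The healthz wait above is a plain HTTPS GET against the pause-171530 API server at 192.168.85.2:8443. Reproducing the probe by hand (a sketch; -k skips verification because the serving certificate chains to the cluster CA rather than the system trust store):

	# Expect HTTP 200 with body "ok", matching the log above.
	curl -k https://192.168.85.2:8443/healthz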
	I1107 17:17:51.204255  265599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 17:17:51.389884  265599 system_pods.go:59] 7 kube-system pods found
	I1107 17:17:51.389918  265599 system_pods.go:61] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running
	I1107 17:17:51.389923  265599 system_pods.go:61] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running
	I1107 17:17:51.389927  265599 system_pods.go:61] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running
	I1107 17:17:51.389932  265599 system_pods.go:61] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running
	I1107 17:17:51.389936  265599 system_pods.go:61] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running
	I1107 17:17:51.389940  265599 system_pods.go:61] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
	I1107 17:17:51.389944  265599 system_pods.go:61] "storage-provisioner" [225d8eea-c00a-46a3-8b89-abb34458db76] Running
	I1107 17:17:51.389949  265599 system_pods.go:74] duration metric: took 185.688763ms to wait for pod list to return data ...
	I1107 17:17:51.389958  265599 default_sa.go:34] waiting for default service account to be created ...
	I1107 17:17:51.587856  265599 default_sa.go:45] found service account: "default"
	I1107 17:17:51.587885  265599 default_sa.go:55] duration metric: took 197.921282ms for default service account to be created ...
	I1107 17:17:51.587896  265599 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 17:17:51.791610  265599 system_pods.go:86] 7 kube-system pods found
	I1107 17:17:51.791656  265599 system_pods.go:89] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running
	I1107 17:17:51.791666  265599 system_pods.go:89] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running
	I1107 17:17:51.791683  265599 system_pods.go:89] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running
	I1107 17:17:51.791692  265599 system_pods.go:89] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running
	I1107 17:17:51.791699  265599 system_pods.go:89] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running
	I1107 17:17:51.791707  265599 system_pods.go:89] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
	I1107 17:17:51.791717  265599 system_pods.go:89] "storage-provisioner" [225d8eea-c00a-46a3-8b89-abb34458db76] Running
	I1107 17:17:51.791725  265599 system_pods.go:126] duration metric: took 203.823982ms to wait for k8s-apps to be running ...
	I1107 17:17:51.791734  265599 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 17:17:51.791785  265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:17:51.802112  265599 system_svc.go:56] duration metric: took 10.369415ms WaitForService to wait for kubelet.
	I1107 17:17:51.802147  265599 kubeadm.go:573] duration metric: took 3.070599627s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 17:17:51.802170  265599 node_conditions.go:102] verifying NodePressure condition ...
	I1107 17:17:51.987329  265599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 17:17:51.987365  265599 node_conditions.go:123] node cpu capacity is 8
	I1107 17:17:51.987379  265599 node_conditions.go:105] duration metric: took 185.202183ms to run NodePressure ...
	I1107 17:17:51.987392  265599 start.go:217] waiting for startup goroutines ...
	I1107 17:17:51.987763  265599 ssh_runner.go:195] Run: rm -f paused
	I1107 17:17:52.043023  265599 start.go:506] kubectl: 1.25.3, cluster: 1.25.3 (minor skew: 0)
	I1107 17:17:52.045707  265599 out.go:177] * Done! kubectl is now configured to use "pause-171530" cluster and "default" namespace by default
	I1107 17:17:50.507106  273963 out.go:204]   - Generating certificates and keys ...
	I1107 17:17:50.507263  273963 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 17:17:50.507377  273963 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 17:17:50.666684  273963 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 17:17:50.780542  273963 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1107 17:17:50.844552  273963 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1107 17:17:50.965350  273963 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1107 17:17:51.084839  273963 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1107 17:17:51.084994  273963 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [cilium-171301 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1107 17:17:51.308472  273963 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1107 17:17:51.308615  273963 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [cilium-171301 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1107 17:17:51.778235  273963 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 17:17:52.391061  273963 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 17:17:52.518001  273963 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1107 17:17:52.518138  273963 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 17:17:52.701867  273963 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 17:17:52.811971  273963 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 17:17:53.225312  273963 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 17:17:53.274661  273963 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 17:17:53.287337  273963 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 17:17:53.288459  273963 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 17:17:53.288545  273963 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1107 17:17:53.394876  273963 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 17:17:49.280257  233006 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1107 17:17:49.280620  233006 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1107 17:17:49.439027  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 17:17:49.464725  233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
	I1107 17:17:49.464794  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 17:17:49.487632  233006 logs.go:274] 1 containers: [6fec17665e36]
	I1107 17:17:49.487702  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 17:17:49.515626  233006 logs.go:274] 0 containers: []
	W1107 17:17:49.515655  233006 logs.go:276] No container was found matching "coredns"
	I1107 17:17:49.515712  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 17:17:49.544438  233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
	I1107 17:17:49.544516  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 17:17:49.573888  233006 logs.go:274] 0 containers: []
	W1107 17:17:49.573916  233006 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:17:49.573964  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 17:17:49.600751  233006 logs.go:274] 0 containers: []
	W1107 17:17:49.600780  233006 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:17:49.600853  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 17:17:49.629515  233006 logs.go:274] 0 containers: []
	W1107 17:17:49.629547  233006 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:17:49.629601  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 17:17:49.660954  233006 logs.go:274] 2 containers: [8891a1b14e04 371287b3c0c6]
	I1107 17:17:49.661005  233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
	I1107 17:17:49.661019  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
	I1107 17:17:49.703297  233006 logs.go:123] Gathering logs for container status ...
	I1107 17:17:49.703332  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 17:17:49.742169  233006 logs.go:123] Gathering logs for kubelet ...
	I1107 17:17:49.742205  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 17:17:49.813899  233006 logs.go:123] Gathering logs for dmesg ...
	I1107 17:17:49.813936  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:17:49.830714  233006 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:17:49.830758  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:17:49.899172  233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:17:49.899199  233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
	I1107 17:17:49.899211  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
	I1107 17:17:49.976394  233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
	I1107 17:17:49.976437  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
	I1107 17:17:50.052769  233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
	I1107 17:17:50.052802  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
	I1107 17:17:50.086254  233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
	I1107 17:17:50.086283  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
	I1107 17:17:50.119937  233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
	I1107 17:17:50.119972  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
	I1107 17:17:50.156488  233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
	I1107 17:17:50.156536  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
	I1107 17:17:50.186320  233006 logs.go:123] Gathering logs for Docker ...
	I1107 17:17:50.186346  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 17:17:52.707667  233006 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1107 17:17:52.708037  233006 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1107 17:17:52.938410  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 17:17:52.965686  233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
	I1107 17:17:52.965766  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 17:17:52.992750  233006 logs.go:274] 1 containers: [6fec17665e36]
	I1107 17:17:52.992825  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 17:17:53.017709  233006 logs.go:274] 0 containers: []
	W1107 17:17:53.017733  233006 logs.go:276] No container was found matching "coredns"
	I1107 17:17:53.017788  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 17:17:53.045447  233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
	I1107 17:17:53.045524  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 17:17:53.071606  233006 logs.go:274] 0 containers: []
	W1107 17:17:53.071635  233006 logs.go:276] No container was found matching "kube-proxy"
	I1107 17:17:53.071688  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1107 17:17:53.095002  233006 logs.go:274] 0 containers: []
	W1107 17:17:53.095032  233006 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1107 17:17:53.095090  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1107 17:17:53.122895  233006 logs.go:274] 0 containers: []
	W1107 17:17:53.122919  233006 logs.go:276] No container was found matching "storage-provisioner"
	I1107 17:17:53.122971  233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 17:17:53.148541  233006 logs.go:274] 2 containers: [8891a1b14e04 371287b3c0c6]
	I1107 17:17:53.148583  233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
	I1107 17:17:53.148594  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
	I1107 17:17:53.181466  233006 logs.go:123] Gathering logs for Docker ...
	I1107 17:17:53.181503  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1107 17:17:53.203825  233006 logs.go:123] Gathering logs for describe nodes ...
	I1107 17:17:53.203856  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 17:17:53.269885  233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 17:17:53.269910  233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
	I1107 17:17:53.269921  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
	I1107 17:17:53.309836  233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
	I1107 17:17:53.309876  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
	I1107 17:17:53.397994  233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
	I1107 17:17:53.398034  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
	I1107 17:17:53.434553  233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
	I1107 17:17:53.434595  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
	I1107 17:17:53.515012  233006 logs.go:123] Gathering logs for kubelet ...
	I1107 17:17:53.515049  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 17:17:53.590837  233006 logs.go:123] Gathering logs for dmesg ...
	I1107 17:17:53.590881  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 17:17:53.608621  233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
	I1107 17:17:53.608659  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
	I1107 17:17:53.640909  233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
	I1107 17:17:53.640937  233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
	I1107 17:17:53.684459  233006 logs.go:123] Gathering logs for container status ...
	I1107 17:17:53.684503  233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
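	This is process 233006's second log-gathering round; it repeats because that cluster's API server keeps refusing connections (the failed healthz dials at 17:17:49 and 17:17:52 above). A sketch of the same triage by hand, inside that node, using only commands the log already runs (the one-liner form is an assumption):

	# Probe the API server; if it is down, list the apiserver container attempts.
	curl -k https://192.168.76.2:8443/healthz \
	  || docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'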
	I1107 17:17:53.396707  273963 out.go:204]   - Booting up control plane ...
	I1107 17:17:53.396844  273963 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 17:17:53.398788  273963 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 17:17:53.400416  273963 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 17:17:53.402210  273963 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 17:17:53.404604  273963 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-11-07 17:15:39 UTC, end at Mon 2022-11-07 17:17:55 UTC. --
	Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.867503766Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 9b7990a4868df38640a1a4b501d3861a71b30b34429e7e3c19b6f85cd55e5664 708aac62fe16d29b27b7e03823a98eca3e1f022eaaaae07b03b614462c34f61c], retrying...."
	Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.947874677Z" level=info msg="Removing stale sandbox 08bdce8089e979563c1c35fc2b9cb00ca97ae33cb7c45028d6147314b55324da (6d1abd3e30d792833852b3f43c7effc3075f17e2807dee93ee5437621536102e)"
	Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.949913639Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 9b7990a4868df38640a1a4b501d3861a71b30b34429e7e3c19b6f85cd55e5664 f88327d8868c8ad0f7411a8b72ba2baa71bca468214ef9b295ee84ffe8afcc29], retrying...."
	Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.982329624Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.027920216Z" level=info msg="Loading containers: done."
	Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.040588525Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.040673581Z" level=info msg="Daemon has completed initialization"
	Nov 07 17:17:24 pause-171530 systemd[1]: Started Docker Application Container Engine.
	Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.057458789Z" level=info msg="API listen on [::]:2376"
	Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.061113478Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.523817703Z" level=info msg="ignoring event" container=7c093d736ba0305191d4e798ca0d308583b1c7463ad986b23c2d186951b7d0ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.529123877Z" level=info msg="ignoring event" container=42f2c39561b11166e1cca511011d19541e07606bda37d3d78a6b8d6324edba56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.531551274Z" level=info msg="ignoring event" container=c109021f97b0ec6487f090af18a20062a7df3c8845d39ce8fa8a5e3494da80ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.536407618Z" level=info msg="ignoring event" container=bc4811d3f9f168bfaa9567e8b88953c21b5104eba6a02a534a6b32e074d9a512 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.536463887Z" level=info msg="ignoring event" container=c9629a7195e0926d21d4aebeb78f3778a8379562c623cac143cfd8764639c395 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.537822290Z" level=info msg="ignoring event" container=cdc8d9ab8c016ad1726c8ec69dafffa0822704571646314f8f002d64229b9dcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.654013442Z" level=error msg="stream copy error: reading from a closed fifo"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.660623628Z" level=error msg="stream copy error: reading from a closed fifo"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.668658992Z" level=error msg="404d7bd895c853d22c917ec8770367d7a91dafd370c7b8959c3253e584e1eb5d cleanup: failed to delete container from containerd: no such container"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.671711028Z" level=error msg="9dc3075461e2264f083ac8045d0398e1cb1b95857a3a65126bf2c8178945eb02 cleanup: failed to delete container from containerd: no such container"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.683178370Z" level=error msg="d4737d2c0cc12722054c6a67e64adfcb09ac5d35405d5f62738a911f119801f2 cleanup: failed to delete container from containerd: no such container"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.730178512Z" level=error msg="1ca6e9485fa8aaf7657cec34a2aafba49fda2fe8d446b8f44f511ca7746e1c0d cleanup: failed to delete container from containerd: no such container"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.730223095Z" level=error msg="Handler for POST /v1.40/containers/1ca6e9485fa8aaf7657cec34a2aafba49fda2fe8d446b8f44f511ca7746e1c0d/start returned error: can't join IPC of container bc4811d3f9f168bfaa9567e8b88953c21b5104eba6a02a534a6b32e074d9a512: container bc4811d3f9f168bfaa9567e8b88953c21b5104eba6a02a534a6b32e074d9a512 is not running"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.734363189Z" level=error msg="ca313e60699e88a95aade29a7a771b01943787674653d827c9ac778c304b7ee2 cleanup: failed to delete container from containerd: no such container"
	Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.889125639Z" level=error msg="b6069c474d48724ad6405cac869a299021de19f0e83735250a6669e95f84de98 cleanup: failed to delete container from containerd: no such container"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	4e27fc3536146       6e38f40d628db       5 seconds ago       Running             storage-provisioner       0                   869229924f7b0
	2678c07441af4       5185b96f0becf       19 seconds ago      Running             coredns                   2                   c8ae9930fd89e
	d128588b435c4       beaaf00edd38a       20 seconds ago      Running             kube-proxy                3                   308cd3b6261d9
	fa1fae9e3dd4c       6d23ec0e8b87e       24 seconds ago      Running             kube-scheduler            3                   499d52ff7ec2d
	9a2c93b7807eb       0346dbd74bcb9       24 seconds ago      Running             kube-apiserver            3                   ca7019d32208a
	c617e5f72b7e0       6039992312758       24 seconds ago      Running             kube-controller-manager   3                   b2be7ef781078
	240c58d21dba8       a8a176a5d5d69       24 seconds ago      Running             etcd                      3                   af4dddaaaab51
	b6069c474d487       5185b96f0becf       27 seconds ago      Created             coredns                   1                   cdc8d9ab8c016
	9dc3075461e22       0346dbd74bcb9       27 seconds ago      Created             kube-apiserver            2                   c109021f97b0e
	404d7bd895c85       6039992312758       27 seconds ago      Created             kube-controller-manager   2                   7c093d736ba03
	ca313e60699e8       6d23ec0e8b87e       27 seconds ago      Created             kube-scheduler            2                   42f2c39561b11
	1ca6e9485fa8a       a8a176a5d5d69       27 seconds ago      Created             etcd                      2                   bc4811d3f9f16
	d4737d2c0cc12       beaaf00edd38a       27 seconds ago      Created             kube-proxy                2                   c9629a7195e09
	
	* 
	* ==> coredns [2678c07441af] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = f3fde9de6486f59fe260f641c8b45d450960379ea9d73a7fef0c1feac6c746730bd77c72d2092518703e00d94c78d1eec0c6cb3efcd4dc489238241cea4bf436
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> coredns [b6069c474d48] <==
	* 
	* 
	* ==> describe nodes <==
	* Name:               pause-171530
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-171530
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262
	                    minikube.k8s.io/name=pause-171530
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_11_07T17_16_00_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Nov 2022 17:15:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-171530
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Nov 2022 17:17:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Nov 2022 17:17:35 +0000   Mon, 07 Nov 2022 17:15:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Nov 2022 17:17:35 +0000   Mon, 07 Nov 2022 17:15:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Nov 2022 17:17:35 +0000   Mon, 07 Nov 2022 17:15:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Nov 2022 17:17:35 +0000   Mon, 07 Nov 2022 17:17:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-171530
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 996614ec4c814b87b7ec8ebee3d0e8c9
	  System UUID:                584d8003-5974-4bad-ab15-c1a6d30346fa
	  Boot ID:                    08dd20cb-78b6-4f23-8a31-d42df46571b3
	  Kernel Version:             5.15.0-1021-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.20
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-565d847f94-r6gbf                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-pause-171530                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kube-apiserver-pause-171530             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-pause-171530    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-627q2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-pause-171530             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  Starting                 19s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m7s (x4 over 2m7s)  kubelet          Node pause-171530 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m7s (x4 over 2m7s)  kubelet          Node pause-171530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m7s (x4 over 2m7s)  kubelet          Node pause-171530 status is now: NodeHasSufficientMemory
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s                 kubelet          Node pause-171530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node pause-171530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s                 kubelet          Node pause-171530 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             115s                 kubelet          Node pause-171530 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                105s                 kubelet          Node pause-171530 status is now: NodeReady
	  Normal  RegisteredNode           103s                 node-controller  Node pause-171530 event: Registered Node pause-171530 in Controller
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)    kubelet          Node pause-171530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)    kubelet          Node pause-171530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)    kubelet          Node pause-171530 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                   node-controller  Node pause-171530 event: Registered Node pause-171530 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.004797] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006797] FS-Cache: O-cookie d=00000000b1e64776{9p.inode} n=0000000007b82556
	[  +0.007369] FS-Cache: O-key=[8] '7fa00f0200000000'
	[  +0.004936] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006594] FS-Cache: N-cookie d=00000000b1e64776{9p.inode} n=000000001524e9eb
	[  +0.008729] FS-Cache: N-key=[8] '7fa00f0200000000'
	[  +0.488901] FS-Cache: Duplicate cookie detected
	[  +0.004717] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006779] FS-Cache: O-cookie d=00000000b1e64776{9p.inode} n=000000004d15690e
	[  +0.007381] FS-Cache: O-key=[8] '8ea00f0200000000'
	[  +0.004952] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006607] FS-Cache: N-cookie d=00000000b1e64776{9p.inode} n=00000000470ffc24
	[  +0.008833] FS-Cache: N-key=[8] '8ea00f0200000000'
	[Nov 7 16:54] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 7 17:05] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
	[  +0.000007] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
	[  +1.008285] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
	[  +0.000005] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
	[  +2.011837] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
	[  +0.000035] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
	[Nov 7 17:06] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
	[  +0.000011] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
	[  +8.191212] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
	[  +0.000044] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
	[Nov 7 17:14] process 'docker/tmp/qemu-check072764330/check' started with executable stack
	
	* 
	* ==> etcd [1ca6e9485fa8] <==
	* 
	* 
	* ==> etcd [240c58d21dba] <==
	* {"level":"info","ts":"2022-11-07T17:17:31.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2022-11-07T17:17:31.626Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2022-11-07T17:17:31.626Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-07T17:17:31.626Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 4"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 4"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-171530 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-07T17:17:33.044Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2022-11-07T17:17:33.044Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-11-07T17:17:43.326Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"152.41473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-565d847f94-r6gbf\" ","response":"range_response_count:1 size:5038"}
	{"level":"info","ts":"2022-11-07T17:17:43.326Z","caller":"traceutil/trace.go:171","msg":"trace[1276518897] range","detail":"{range_begin:/registry/pods/kube-system/coredns-565d847f94-r6gbf; range_end:; response_count:1; response_revision:452; }","duration":"152.549915ms","start":"2022-11-07T17:17:43.174Z","end":"2022-11-07T17:17:43.326Z","steps":["trace[1276518897] 'agreement among raft nodes before linearized reading'  (duration: 40.877163ms)","trace[1276518897] 'range keys from in-memory index tree'  (duration: 111.462423ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  17:17:55 up  1:00,  0 users,  load average: 3.46, 3.53, 2.55
	Linux pause-171530 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [9a2c93b7807e] <==
	* I1107 17:17:34.912107       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1107 17:17:34.912439       1 controller.go:83] Starting OpenAPI AggregationController
	I1107 17:17:34.912469       1 available_controller.go:491] Starting AvailableConditionController
	I1107 17:17:34.912477       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I1107 17:17:34.912134       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1107 17:17:34.912451       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I1107 17:17:34.920676       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1107 17:17:34.912711       1 controller.go:85] Starting OpenAPI controller
	I1107 17:17:35.019428       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1107 17:17:35.019719       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1107 17:17:35.020233       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1107 17:17:35.019789       1 cache.go:39] Caches are synced for autoregister controller
	I1107 17:17:35.020532       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1107 17:17:35.020562       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I1107 17:17:35.021059       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1107 17:17:35.038005       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 17:17:35.688505       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1107 17:17:35.915960       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1107 17:17:36.540683       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1107 17:17:36.550675       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1107 17:17:36.580888       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1107 17:17:36.641282       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 17:17:36.648284       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1107 17:17:47.967954       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 17:17:48.027205       1 controller.go:616] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [9dc3075461e2] <==
	* 
	* 
	* ==> kube-controller-manager [404d7bd895c8] <==
	* 
	* 
	* ==> kube-controller-manager [c617e5f72b7e] <==
	* I1107 17:17:48.008590       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I1107 17:17:48.008778       1 event.go:294] "Event occurred" object="pause-171530" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-171530 event: Registered Node pause-171530 in Controller"
	I1107 17:17:48.008734       1 taint_manager.go:209] "Sending events to api server"
	W1107 17:17:48.008888       1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-171530. Assuming now as a timestamp.
	I1107 17:17:48.008920       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I1107 17:17:48.019368       1 shared_informer.go:262] Caches are synced for namespace
	I1107 17:17:48.020195       1 shared_informer.go:262] Caches are synced for node
	I1107 17:17:48.020224       1 range_allocator.go:166] Starting range CIDR allocator
	I1107 17:17:48.020230       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1107 17:17:48.020269       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1107 17:17:48.022046       1 shared_informer.go:262] Caches are synced for expand
	I1107 17:17:48.023995       1 shared_informer.go:262] Caches are synced for attach detach
	I1107 17:17:48.028885       1 shared_informer.go:262] Caches are synced for daemon sets
	I1107 17:17:48.040695       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1107 17:17:48.059583       1 shared_informer.go:262] Caches are synced for disruption
	I1107 17:17:48.093865       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1107 17:17:48.094015       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1107 17:17:48.094994       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1107 17:17:48.095030       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1107 17:17:48.152712       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1107 17:17:48.181359       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 17:17:48.224684       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 17:17:48.538831       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 17:17:48.624372       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 17:17:48.624404       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [d128588b435c] <==
	* I1107 17:17:35.802654       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I1107 17:17:35.802795       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I1107 17:17:35.802838       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 17:17:35.823572       1 server_others.go:206] "Using iptables Proxier"
	I1107 17:17:35.823628       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1107 17:17:35.823641       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1107 17:17:35.823661       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1107 17:17:35.823700       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 17:17:35.823862       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 17:17:35.824181       1 server.go:661] "Version info" version="v1.25.3"
	I1107 17:17:35.824201       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:17:35.824705       1 config.go:226] "Starting endpoint slice config controller"
	I1107 17:17:35.824729       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 17:17:35.824729       1 config.go:317] "Starting service config controller"
	I1107 17:17:35.824742       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 17:17:35.824785       1 config.go:444] "Starting node config controller"
	I1107 17:17:35.824797       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 17:17:35.925677       1 shared_informer.go:262] Caches are synced for node config
	I1107 17:17:35.925674       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1107 17:17:35.925738       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [d4737d2c0cc1] <==
	* 
	* 
	* ==> kube-scheduler [ca313e60699e] <==
	* 
	* 
	* ==> kube-scheduler [fa1fae9e3dd4] <==
	* I1107 17:17:32.057264       1 serving.go:348] Generated self-signed cert in-memory
	W1107 17:17:34.927696       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1107 17:17:34.927730       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 17:17:34.927742       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 17:17:34.927752       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 17:17:35.026876       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1107 17:17:35.026910       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 17:17:35.028404       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1107 17:17:35.032408       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1107 17:17:35.032445       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1107 17:17:35.049814       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 17:17:35.150068       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-11-07 17:15:39 UTC, end at Mon 2022-11-07 17:17:56 UTC. --
	Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.471796    5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
	Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.572533    5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
	Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.673100    5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
	Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.773944    5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
	Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.874639    5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.019502    5996 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.020405    5996 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.026112    5996 apiserver.go:52] "Watching apiserver"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.028911    5996 topology_manager.go:205] "Topology Admit Handler"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.029237    5996 topology_manager.go:205] "Topology Admit Handler"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.034807    5996 kubelet_node_status.go:108] "Node was previously registered" node="pause-171530"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.034917    5996 kubelet_node_status.go:73] "Successfully registered node" node="pause-171530"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044165    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/177a31d0-df11-4105-9f5a-c3effe2fc965-xtables-lock\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044237    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxrf\" (UniqueName: \"kubernetes.io/projected/177a31d0-df11-4105-9f5a-c3effe2fc965-kube-api-access-xlxrf\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044387    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpcd\" (UniqueName: \"kubernetes.io/projected/4070c2b0-f450-4494-afc9-30615ea8f3c9-kube-api-access-2kpcd\") pod \"coredns-565d847f94-r6gbf\" (UID: \"4070c2b0-f450-4494-afc9-30615ea8f3c9\") " pod="kube-system/coredns-565d847f94-r6gbf"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044450    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/177a31d0-df11-4105-9f5a-c3effe2fc965-lib-modules\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044482    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4070c2b0-f450-4494-afc9-30615ea8f3c9-config-volume\") pod \"coredns-565d847f94-r6gbf\" (UID: \"4070c2b0-f450-4494-afc9-30615ea8f3c9\") " pod="kube-system/coredns-565d847f94-r6gbf"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044514    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/177a31d0-df11-4105-9f5a-c3effe2fc965-kube-proxy\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044543    5996 reconciler.go:169] "Reconciler: start to sync state"
	Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.630307    5996 scope.go:115] "RemoveContainer" containerID="d4737d2c0cc12722054c6a67e64adfcb09ac5d35405d5f62738a911f119801f2"
	Nov 07 17:17:37 pause-171530 kubelet[5996]: I1107 17:17:37.800520    5996 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Nov 07 17:17:44 pause-171530 kubelet[5996]: I1107 17:17:44.973868    5996 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Nov 07 17:17:49 pause-171530 kubelet[5996]: I1107 17:17:49.752701    5996 topology_manager.go:205] "Topology Admit Handler"
	Nov 07 17:17:49 pause-171530 kubelet[5996]: I1107 17:17:49.934212    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pv8z\" (UniqueName: \"kubernetes.io/projected/225d8eea-c00a-46a3-8b89-abb34458db76-kube-api-access-4pv8z\") pod \"storage-provisioner\" (UID: \"225d8eea-c00a-46a3-8b89-abb34458db76\") " pod="kube-system/storage-provisioner"
	Nov 07 17:17:49 pause-171530 kubelet[5996]: I1107 17:17:49.934319    5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/225d8eea-c00a-46a3-8b89-abb34458db76-tmp\") pod \"storage-provisioner\" (UID: \"225d8eea-c00a-46a3-8b89-abb34458db76\") " pod="kube-system/storage-provisioner"
	
	* 
	* ==> storage-provisioner [4e27fc353614] <==
	* I1107 17:17:50.349388       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 17:17:50.361550       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 17:17:50.361616       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 17:17:50.369430       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 17:17:50.369585       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"892faada-f17d-4afd-8626-0abe858770d6", APIVersion:"v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-171530_dd9a8022-cc81-49c9-8ac6-84a620a09cdb became leader
	I1107 17:17:50.369661       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-171530_dd9a8022-cc81-49c9-8ac6-84a620a09cdb!
	I1107 17:17:50.470629       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-171530_dd9a8022-cc81-49c9-8ac6-84a620a09cdb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-171530 -n pause-171530
helpers_test.go:261: (dbg) Run:  kubectl --context pause-171530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-171530 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-171530 describe pod : exit status 1 (55.686107ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context pause-171530 describe pod : exit status 1
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (51.24s)
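Note on the failing assertion: pause_test.go:100 greps the second start's stdout for the sentence "The running cluster does not require reconfiguration"; the run above restarted cleanly but never printed it (the output shows "Updating the running docker \"pause-171530\" container ..." instead). Below is a minimal Go sketch of this style of check; the function name and structure are hypothetical, not the actual pause_test.go source.

package pause_test

import (
	"os/exec"
	"strings"
	"testing"
)

// verifySecondStart is an illustrative sketch of the assertion that failed
// above: run `minikube start` a second time against an existing profile and
// require the "no reconfiguration" sentence in the combined output.
func verifySecondStart(t *testing.T, profile string) {
	out, err := exec.Command("out/minikube-linux-amd64", "start",
		"-p", profile, "--alsologtostderr", "-v=1",
		"--driver=docker", "--container-runtime=docker").CombinedOutput()
	if err != nil {
		t.Fatalf("second start failed: %v\n%s", err, out)
	}
	want := "The running cluster does not require reconfiguration"
	if !strings.Contains(string(out), want) {
		t.Errorf("second start output missing %q", want)
	}
}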

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (522.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-171301 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-171301 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (8m42.436043854s)

                                                
                                                
-- stdout --
	* [calico-171301] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node calico-171301 in cluster calico-171301
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 17:17:59.392919  281054 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:17:59.393207  281054 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:17:59.393222  281054 out.go:309] Setting ErrFile to fd 2...
	I1107 17:17:59.393229  281054 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:17:59.393402  281054 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
	I1107 17:17:59.394267  281054 out.go:303] Setting JSON to false
	I1107 17:17:59.397054  281054 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3630,"bootTime":1667837849,"procs":1298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 17:17:59.397142  281054 start.go:126] virtualization: kvm guest
	I1107 17:17:59.399992  281054 out.go:177] * [calico-171301] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 17:17:59.401655  281054 notify.go:220] Checking for updates...
	I1107 17:17:59.403191  281054 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 17:17:59.404684  281054 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 17:17:59.406294  281054 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	I1107 17:17:59.408128  281054 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	I1107 17:17:59.409735  281054 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 17:17:59.411694  281054 config.go:180] Loaded profile config "auto-171300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:59.411788  281054 config.go:180] Loaded profile config "cilium-171301": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:59.411862  281054 config.go:180] Loaded profile config "kubernetes-upgrade-171418": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:17:59.411904  281054 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 17:17:59.482408  281054 docker.go:137] docker version: linux-20.10.21
	I1107 17:17:59.482528  281054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:17:59.633024  281054 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:55 SystemTime:2022-11-07 17:17:59.513471485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:17:59.633170  281054 docker.go:254] overlay module found
	I1107 17:17:59.636897  281054 out.go:177] * Using the docker driver based on user configuration
	I1107 17:17:59.638549  281054 start.go:282] selected driver: docker
	I1107 17:17:59.638577  281054 start.go:808] validating driver "docker" against <nil>
	I1107 17:17:59.638602  281054 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 17:17:59.639794  281054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:17:59.796307  281054 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-07 17:17:59.681970569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:17:59.796445  281054 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 17:17:59.796606  281054 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 17:17:59.800080  281054 out.go:177] * Using Docker driver with root privileges
	I1107 17:17:59.801870  281054 cni.go:95] Creating CNI manager for "calico"
	I1107 17:17:59.801891  281054 start_flags.go:312] Found "Calico" CNI - setting NetworkPlugin=cni
	I1107 17:17:59.801914  281054 start_flags.go:317] config:
	{Name:calico-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:17:59.803780  281054 out.go:177] * Starting control plane node calico-171301 in cluster calico-171301
	I1107 17:17:59.805279  281054 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 17:17:59.807025  281054 out.go:177] * Pulling base image ...
	I1107 17:17:59.808579  281054 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 17:17:59.808607  281054 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 17:17:59.808630  281054 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1107 17:17:59.808645  281054 cache.go:57] Caching tarball of preloaded images
	I1107 17:17:59.808923  281054 preload.go:174] Found /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 17:17:59.808942  281054 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1107 17:17:59.809084  281054 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/config.json ...
	I1107 17:17:59.809110  281054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/config.json: {Name:mkf42df2050fdbcd2c5f4bf98f35a46153d5f104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:17:59.870497  281054 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1107 17:17:59.870535  281054 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1107 17:17:59.870553  281054 cache.go:208] Successfully downloaded all kic artifacts
	I1107 17:17:59.870606  281054 start.go:364] acquiring machines lock for calico-171301: {Name:mke5fe3643e70c5b237a13ef0e3125292141039c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 17:17:59.870813  281054 start.go:368] acquired machines lock for "calico-171301" in 179.735µs
	I1107 17:17:59.870847  281054 start.go:93] Provisioning new machine with config: &{Name:calico-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 17:17:59.870976  281054 start.go:125] createHost starting for "" (driver="docker")
	I1107 17:17:59.875270  281054 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 17:17:59.875614  281054 start.go:159] libmachine.API.Create for "calico-171301" (driver="docker")
	I1107 17:17:59.875653  281054 client.go:168] LocalClient.Create starting
	I1107 17:17:59.875734  281054 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem
	I1107 17:17:59.875771  281054 main.go:134] libmachine: Decoding PEM data...
	I1107 17:17:59.875790  281054 main.go:134] libmachine: Parsing certificate...
	I1107 17:17:59.875860  281054 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem
	I1107 17:17:59.875881  281054 main.go:134] libmachine: Decoding PEM data...
	I1107 17:17:59.875897  281054 main.go:134] libmachine: Parsing certificate...
	I1107 17:17:59.876302  281054 cli_runner.go:164] Run: docker network inspect calico-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 17:17:59.901434  281054 cli_runner.go:211] docker network inspect calico-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 17:17:59.901504  281054 network_create.go:272] running [docker network inspect calico-171301] to gather additional debugging logs...
	I1107 17:17:59.901553  281054 cli_runner.go:164] Run: docker network inspect calico-171301
	W1107 17:17:59.928998  281054 cli_runner.go:211] docker network inspect calico-171301 returned with exit code 1
	I1107 17:17:59.929043  281054 network_create.go:275] error running [docker network inspect calico-171301]: docker network inspect calico-171301: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-171301
	I1107 17:17:59.929060  281054 network_create.go:277] output of [docker network inspect calico-171301]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-171301
	
	** /stderr **
	I1107 17:17:59.929111  281054 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:17:59.974821  281054 network.go:246] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-aa8bc6b4377d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f9:4a:a0:7f}}
	I1107 17:17:59.975753  281054 network.go:246] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-46185e74412a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:46:c3:83:d6}}
	I1107 17:17:59.976546  281054 network.go:246] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-286d4025b62f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:c5:c4:e0:e5}}
	I1107 17:17:59.977515  281054 network.go:246] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-40bab4aeefcc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:b1:29:72:92}}
	I1107 17:17:59.978476  281054 network.go:295] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.85.0:0xc000a26290] misses:0}
	I1107 17:17:59.978515  281054 network.go:241] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1107 17:17:59.978532  281054 network_create.go:115] attempt to create docker network calico-171301 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1107 17:17:59.978591  281054 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-171301 calico-171301
	I1107 17:18:00.115634  281054 network_create.go:99] docker network calico-171301 192.168.85.0/24 created
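The four "skipping subnet" lines show the free-subnet scan: starting at 192.168.49.0/24, the third octet advances in steps of 9 (49, 58, 67, 76) until a /24 with no existing bridge is found, here 192.168.85.0/24. A minimal Go sketch of that scan, assuming the step-of-9 pattern observed above (the real logic in network.go also inspects host interfaces rather than a hard-coded set):

    package main

    import "fmt"

    func main() {
        // Third octets already claimed by existing minikube bridges (from the log above).
        taken := map[int]bool{49: true, 58: true, 67: true, 76: true}

        // Walk candidate /24s the way the log suggests: 49, 58, 67, ... in steps of 9.
        for third := 49; third <= 255; third += 9 {
            if taken[third] {
                fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
                continue
            }
            fmt.Printf("using free private subnet 192.168.%d.0/24\n", third)
            break
        }
    }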
	I1107 17:18:00.115674  281054 kic.go:106] calculated static IP "192.168.85.2" for the "calico-171301" container
	I1107 17:18:00.115730  281054 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 17:18:00.160163  281054 cli_runner.go:164] Run: docker volume create calico-171301 --label name.minikube.sigs.k8s.io=calico-171301 --label created_by.minikube.sigs.k8s.io=true
	I1107 17:18:00.191382  281054 oci.go:103] Successfully created a docker volume calico-171301
	I1107 17:18:00.191459  281054 cli_runner.go:164] Run: docker run --rm --name calico-171301-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-171301 --entrypoint /usr/bin/test -v calico-171301:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1107 17:18:01.052751  281054 oci.go:107] Successfully prepared a docker volume calico-171301
	I1107 17:18:01.052814  281054 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 17:18:01.052834  281054 kic.go:179] Starting extracting preloaded images to volume ...
	I1107 17:18:01.052923  281054 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-171301:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 17:18:05.020839  281054 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-171301:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (3.96783611s)
	I1107 17:18:05.020877  281054 kic.go:188] duration metric: took 3.968038 seconds to extract preloaded images to volume
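The preload step is plain tar run inside a throwaway kicbase container, with the lz4 tarball bind-mounted read-only and the machine's named volume mounted at /extractDir. A sketch of assembling that exact docker invocation from Go with os/exec (paths and image taken from the log; error handling trimmed to a bare exit):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4"
        volume := "calico-171301"
        image := "gcr.io/k8s-minikube/kicbase:v0.0.36"

        // docker run --rm --entrypoint /usr/bin/tar -v <tarball>:/preloaded.tar:ro \
        //   -v <volume>:/extractDir <image> -I lz4 -xf /preloaded.tar -C /extractDir
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }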
	W1107 17:18:05.021014  281054 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 17:18:05.021141  281054 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 17:18:05.175518  281054 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-171301 --name calico-171301 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-171301 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-171301 --network calico-171301 --ip 192.168.85.2 --volume calico-171301:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1107 17:18:05.784003  281054 cli_runner.go:164] Run: docker container inspect calico-171301 --format={{.State.Running}}
	I1107 17:18:05.816740  281054 cli_runner.go:164] Run: docker container inspect calico-171301 --format={{.State.Status}}
	I1107 17:18:05.846914  281054 cli_runner.go:164] Run: docker exec calico-171301 stat /var/lib/dpkg/alternatives/iptables
	I1107 17:18:05.929166  281054 oci.go:144] the created container "calico-171301" has a running status.
	I1107 17:18:05.929207  281054 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15310-3679/.minikube/machines/calico-171301/id_rsa...
	I1107 17:18:06.079628  281054 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15310-3679/.minikube/machines/calico-171301/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 17:18:06.168154  281054 cli_runner.go:164] Run: docker container inspect calico-171301 --format={{.State.Status}}
	I1107 17:18:06.205740  281054 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 17:18:06.205777  281054 kic_runner.go:114] Args: [docker exec --privileged calico-171301 chown docker:docker /home/docker/.ssh/authorized_keys]
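Creating the kic SSH key (kic.go:210 above) boils down to generating an RSA keypair, writing the public half in authorized_keys format, and chowning it to the docker user inside the container. A self-contained sketch of the keypair part, assuming RSA (as the id_rsa naming suggests) and the golang.org/x/crypto/ssh helpers:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Private key; the log writes this as .../machines/calico-171301/id_rsa.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
            panic(err)
        }

        // authorized_keys line (the 381-byte payload copied into the container above).
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
            panic(err)
        }
    }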
	I1107 17:18:06.292588  281054 cli_runner.go:164] Run: docker container inspect calico-171301 --format={{.State.Status}}
	I1107 17:18:06.319669  281054 machine.go:88] provisioning docker machine ...
	I1107 17:18:06.319717  281054 ubuntu.go:169] provisioning hostname "calico-171301"
	I1107 17:18:06.319786  281054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171301
	I1107 17:18:06.348573  281054 main.go:134] libmachine: Using SSH client type: native
	I1107 17:18:06.348810  281054 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49389 <nil> <nil>}
	I1107 17:18:06.348836  281054 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-171301 && echo "calico-171301" | sudo tee /etc/hostname
	I1107 17:18:06.526384  281054 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-171301
	
	I1107 17:18:06.526489  281054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171301
	I1107 17:18:06.553954  281054 main.go:134] libmachine: Using SSH client type: native
	I1107 17:18:06.554201  281054 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49389 <nil> <nil>}
	I1107 17:18:06.554233  281054 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-171301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-171301/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-171301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 17:18:06.670607  281054 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1107 17:18:06.670641  281054 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-3679/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-3679/.minikube}
	I1107 17:18:06.670664  281054 ubuntu.go:177] setting up certificates
	I1107 17:18:06.670675  281054 provision.go:83] configureAuth start
	I1107 17:18:06.670784  281054 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-171301
	I1107 17:18:06.696566  281054 provision.go:138] copyHostCerts
	I1107 17:18:06.696642  281054 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem, removing ...
	I1107 17:18:06.696657  281054 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem
	I1107 17:18:06.696742  281054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem (1082 bytes)
	I1107 17:18:06.696893  281054 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem, removing ...
	I1107 17:18:06.696911  281054 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem
	I1107 17:18:06.696954  281054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem (1123 bytes)
	I1107 17:18:06.697053  281054 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem, removing ...
	I1107 17:18:06.697066  281054 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem
	I1107 17:18:06.697104  281054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem (1675 bytes)
	I1107 17:18:06.697197  281054 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem org=jenkins.calico-171301 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube calico-171301]
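The server certificate generated here is signed by the minikube CA and carries the SANs listed in the log: 192.168.85.2, 127.0.0.1, localhost, minikube, and calico-171301. A stripped-down crypto/x509 sketch of such a cert; as a deliberate simplification it self-signs instead of chaining to the CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.calico-171301"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "calico-171301"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.85.2"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        out, err := os.Create("server.pem")
        if err != nil {
            panic(err)
        }
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }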
	I1107 17:18:06.798281  281054 provision.go:172] copyRemoteCerts
	I1107 17:18:06.798359  281054 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 17:18:06.798417  281054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171301
	I1107 17:18:06.828006  281054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/calico-171301/id_rsa Username:docker}
	I1107 17:18:06.915346  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1107 17:18:06.936970  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1107 17:18:06.957907  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 17:18:06.980120  281054 provision.go:86] duration metric: configureAuth took 309.433255ms
	I1107 17:18:06.980154  281054 ubuntu.go:193] setting minikube options for container-runtime
	I1107 17:18:06.980308  281054 config.go:180] Loaded profile config "calico-171301": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:18:06.980369  281054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171301
	I1107 17:18:07.012128  281054 main.go:134] libmachine: Using SSH client type: native
	I1107 17:18:07.012352  281054 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49389 <nil> <nil>}
	I1107 17:18:07.012387  281054 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 17:18:07.131302  281054 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 17:18:07.131335  281054 ubuntu.go:71] root file system type: overlay
	I1107 17:18:07.131485  281054 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 17:18:07.131544  281054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171301
	I1107 17:18:07.165366  281054 main.go:134] libmachine: Using SSH client type: native
	I1107 17:18:07.165545  281054 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49389 <nil> <nil>}
	I1107 17:18:07.165641  281054 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 17:18:07.297489  281054 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1107 17:18:07.297571  281054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171301
	I1107 17:18:07.331932  281054 main.go:134] libmachine: Using SSH client type: native
	I1107 17:18:07.332130  281054 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 127.0.0.1 49389 <nil> <nil>}
	I1107 17:18:07.332161  281054 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 17:18:09.080195  281054 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-07 17:18:07.292333923 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1107 17:18:09.080238  281054 machine.go:91] provisioned docker machine in 2.760540091s
	I1107 17:18:09.080250  281054 client.go:171] LocalClient.Create took 9.204590477s
	I1107 17:18:09.080262  281054 start.go:167] duration metric: libmachine.API.Create for "calico-171301" took 9.204650571s
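The docker.service update above follows a guard idiom: write docker.service.new, diff -u it against the live unit, and only when the two differ move the new file into place and daemon-reload/enable/restart docker, so an already-correct unit costs nothing (the diff output shows they did differ here, hence the restart). The same compare-and-swap sketched locally in Go, with paths from the log; it needs root, and systemctl is invoked as an external command:

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    func main() {
        const unit = "/lib/systemd/system/docker.service"
        cur, _ := os.ReadFile(unit) // a missing unit reads as empty and simply fails the equality check
        next, err := os.ReadFile(unit + ".new")
        if err != nil {
            panic(err)
        }
        if bytes.Equal(cur, next) {
            return // unit already up to date; skip the docker restart
        }
        if err := os.Rename(unit+".new", unit); err != nil {
            panic(err)
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"-f", "enable", "docker"}, {"-f", "restart", "docker"},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                panic(string(out))
            }
        }
    }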
	I1107 17:18:09.080384  281054 start.go:300] post-start starting for "calico-171301" (driver="docker")
	I1107 17:18:09.080393  281054 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 17:18:09.080462  281054 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 17:18:09.080515  281054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171301
	I1107 17:18:09.111672  281054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/calico-171301/id_rsa Username:docker}
	I1107 17:18:09.203557  281054 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 17:18:09.206487  281054 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 17:18:09.206507  281054 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 17:18:09.206521  281054 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 17:18:09.206526  281054 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1107 17:18:09.206534  281054 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/addons for local assets ...
	I1107 17:18:09.206582  281054 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/files for local assets ...
	I1107 17:18:09.206647  281054 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem -> 101292.pem in /etc/ssl/certs
	I1107 17:18:09.206734  281054 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 17:18:09.213851  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /etc/ssl/certs/101292.pem (1708 bytes)
	I1107 17:18:09.235093  281054 start.go:303] post-start completed in 154.692659ms
	I1107 17:18:09.235508  281054 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-171301
	I1107 17:18:09.268448  281054 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/config.json ...
	I1107 17:18:09.268738  281054 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 17:18:09.268790  281054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171301
	I1107 17:18:09.297655  281054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/calico-171301/id_rsa Username:docker}
	I1107 17:18:09.379513  281054 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 17:18:09.383963  281054 start.go:128] duration metric: createHost completed in 9.512966705s
	I1107 17:18:09.383990  281054 start.go:83] releasing machines lock for "calico-171301", held for 9.51315905s
	I1107 17:18:09.384070  281054 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-171301
	I1107 17:18:09.409990  281054 ssh_runner.go:195] Run: systemctl --version
	I1107 17:18:09.410033  281054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171301
	I1107 17:18:09.410107  281054 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 17:18:09.410179  281054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171301
	I1107 17:18:09.440071  281054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/calico-171301/id_rsa Username:docker}
	I1107 17:18:09.440986  281054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/calico-171301/id_rsa Username:docker}
	I1107 17:18:09.553108  281054 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 17:18:09.562210  281054 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1107 17:18:09.577201  281054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:18:09.660961  281054 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1107 17:18:09.763004  281054 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 17:18:09.775707  281054 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1107 17:18:09.775790  281054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 17:18:09.786987  281054 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 17:18:09.803541  281054 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 17:18:09.921554  281054 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 17:18:10.010148  281054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:18:10.102607  281054 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 17:18:10.398121  281054 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1107 17:18:10.487711  281054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 17:18:10.570083  281054 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1107 17:18:10.580816  281054 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1107 17:18:10.580893  281054 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1107 17:18:10.584164  281054 start.go:472] Will wait 60s for crictl version
	I1107 17:18:10.584235  281054 ssh_runner.go:195] Run: sudo crictl version
	I1107 17:18:10.615496  281054 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1107 17:18:10.615567  281054 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 17:18:10.644549  281054 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 17:18:10.678255  281054 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1107 17:18:10.678326  281054 cli_runner.go:164] Run: docker network inspect calico-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 17:18:10.704798  281054 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1107 17:18:10.708439  281054 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
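The { grep -v ...; echo ...; } > /tmp/h.$$; sudo cp pattern rewrites /etc/hosts without sed -i: drop any stale line for the name, append the fresh tab-separated mapping, then copy the temp file back over the original. A Go rendering of the same filter-and-append, assuming a direct write is acceptable in place of the temp-file-and-cp dance:

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost drops any existing line ending in "\t<name>" and appends "ip\tname".
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name, "")
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }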
	I1107 17:18:10.718061  281054 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1107 17:18:10.718155  281054 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 17:18:10.745085  281054 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 17:18:10.745113  281054 docker.go:543] Images already preloaded, skipping extraction
	I1107 17:18:10.745170  281054 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 17:18:10.769018  281054 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1107 17:18:10.769050  281054 cache_images.go:84] Images are preloaded, skipping loading
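"Images are preloaded, skipping loading" falls out of comparing the docker images --format {{.Repository}}:{{.Tag}} listing against the images the preload is expected to contain. A hedged sketch of that set check, with the expected list copied from the stdout block above (the real check lives in cache_images.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        expected := []string{
            "registry.k8s.io/kube-apiserver:v1.25.3",
            "registry.k8s.io/kube-controller-manager:v1.25.3",
            "registry.k8s.io/kube-scheduler:v1.25.3",
            "registry.k8s.io/kube-proxy:v1.25.3",
            "registry.k8s.io/pause:3.8",
            "registry.k8s.io/etcd:3.5.4-0",
            "registry.k8s.io/coredns/coredns:v1.9.3",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        for _, img := range expected {
            if !have[img] {
                fmt.Println("missing, would trigger image loading:", img)
                return
            }
        }
        fmt.Println("images already preloaded, skipping loading")
    }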
	I1107 17:18:10.769108  281054 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 17:18:10.843384  281054 cni.go:95] Creating CNI manager for "calico"
	I1107 17:18:10.843419  281054 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 17:18:10.843444  281054 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-171301 NodeName:calico-171301 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1107 17:18:10.843628  281054 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "calico-171301"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
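The generated kubeadm config above is four YAML documents separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks the stream and pulls out the pod subnet, using the third-party gopkg.in/yaml.v3 decoder (an assumption; any multi-document YAML decoder would do):

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // yields one document per Decode call
        for {
            var m map[string]interface{}
            if err := dec.Decode(&m); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                panic(err)
            }
            if m["kind"] == "ClusterConfiguration" {
                net := m["networking"].(map[string]interface{})
                fmt.Println("podSubnet:", net["podSubnet"]) // "10.244.0.0/16" in the log
            }
        }
    }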
	
	I1107 17:18:10.843733  281054 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=calico-171301 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:calico-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I1107 17:18:10.843781  281054 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1107 17:18:10.850859  281054 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 17:18:10.850923  281054 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 17:18:10.857979  281054 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
	I1107 17:18:10.871958  281054 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 17:18:10.885652  281054 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
	I1107 17:18:10.902836  281054 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1107 17:18:10.906784  281054 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 17:18:10.918085  281054 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301 for IP: 192.168.85.2
	I1107 17:18:10.918215  281054 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key
	I1107 17:18:10.918272  281054 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key
	I1107 17:18:10.918333  281054 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/client.key
	I1107 17:18:10.918355  281054 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/client.crt with IP's: []
	I1107 17:18:11.419143  281054 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/client.crt ...
	I1107 17:18:11.419179  281054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/client.crt: {Name:mk176ad4bdd4620d394f32baf61fed481c90ac14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:18:11.419406  281054 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/client.key ...
	I1107 17:18:11.419423  281054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/client.key: {Name:mkb1791387c8987048697925c047cd36dd586ed5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:18:11.419536  281054 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/apiserver.key.43b9df8c
	I1107 17:18:11.419555  281054 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 17:18:11.500217  281054 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/apiserver.crt.43b9df8c ...
	I1107 17:18:11.500250  281054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/apiserver.crt.43b9df8c: {Name:mk27e221fb5ffaacc0dd8a2e4a2c61fd6629e98e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:18:11.500449  281054 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/apiserver.key.43b9df8c ...
	I1107 17:18:11.500466  281054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/apiserver.key.43b9df8c: {Name:mka116755e40b34e765e98f1d96f8e46657ad00a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:18:11.500586  281054 certs.go:320] copying /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/apiserver.crt
	I1107 17:18:11.500662  281054 certs.go:324] copying /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/apiserver.key
	I1107 17:18:11.500730  281054 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/proxy-client.key
	I1107 17:18:11.500750  281054 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/proxy-client.crt with IP's: []
	I1107 17:18:11.813041  281054 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/proxy-client.crt ...
	I1107 17:18:11.813069  281054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/proxy-client.crt: {Name:mk97f711ff747b56da85035bee4b82972cc04e2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:18:11.813263  281054 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/proxy-client.key ...
	I1107 17:18:11.813278  281054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/proxy-client.key: {Name:mkad1c71aca3de4a1631650c76e333f9d7540182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:18:11.813445  281054 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem (1338 bytes)
	W1107 17:18:11.813481  281054 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129_empty.pem, impossibly tiny 0 bytes
	I1107 17:18:11.813492  281054 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 17:18:11.813520  281054 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem (1082 bytes)
	I1107 17:18:11.813543  281054 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem (1123 bytes)
	I1107 17:18:11.813566  281054 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem (1675 bytes)
	I1107 17:18:11.813606  281054 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem (1708 bytes)
	I1107 17:18:11.814188  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 17:18:11.835654  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 17:18:11.854207  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 17:18:11.874294  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/calico-171301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 17:18:11.895060  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 17:18:11.915935  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 17:18:11.935381  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 17:18:11.954541  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 17:18:11.975865  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem --> /usr/share/ca-certificates/10129.pem (1338 bytes)
	I1107 17:18:11.995239  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /usr/share/ca-certificates/101292.pem (1708 bytes)
	I1107 17:18:12.016225  281054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 17:18:12.034819  281054 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 17:18:12.048801  281054 ssh_runner.go:195] Run: openssl version
	I1107 17:18:12.055273  281054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101292.pem && ln -fs /usr/share/ca-certificates/101292.pem /etc/ssl/certs/101292.pem"
	I1107 17:18:12.063628  281054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101292.pem
	I1107 17:18:12.066866  281054 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  7 16:50 /usr/share/ca-certificates/101292.pem
	I1107 17:18:12.066915  281054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101292.pem
	I1107 17:18:12.071830  281054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101292.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 17:18:12.080791  281054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 17:18:12.089589  281054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:18:12.093315  281054 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  7 16:46 /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:18:12.093364  281054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 17:18:12.098436  281054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 17:18:12.106510  281054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10129.pem && ln -fs /usr/share/ca-certificates/10129.pem /etc/ssl/certs/10129.pem"
	I1107 17:18:12.115001  281054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10129.pem
	I1107 17:18:12.118321  281054 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  7 16:50 /usr/share/ca-certificates/10129.pem
	I1107 17:18:12.118375  281054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10129.pem
	I1107 17:18:12.123507  281054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10129.pem /etc/ssl/certs/51391683.0"
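Each CA is copied into /usr/share/ca-certificates and then linked under /etc/ssl/certs by its OpenSSL subject hash plus a .0 suffix (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the test certs above); that hash-named symlink is how OpenSSL locates CAs in a certificate directory. A sketch reproducing the link step, shelling out to openssl for the hash:

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"

        // openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        link := "/etc/ssl/certs/" + hash + ".0"
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(cert, link); err != nil {
                panic(err)
            }
        }
    }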
	I1107 17:18:12.131174  281054 kubeadm.go:396] StartCluster: {Name:calico-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:calico-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 17:18:12.131316  281054 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 17:18:12.153326  281054 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 17:18:12.160665  281054 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 17:18:12.167961  281054 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1107 17:18:12.168033  281054 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 17:18:12.175711  281054 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 17:18:12.175752  281054 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 17:18:12.221754  281054 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1107 17:18:12.221822  281054 kubeadm.go:317] [preflight] Running pre-flight checks
	I1107 17:18:12.260119  281054 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I1107 17:18:12.260199  281054 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
	I1107 17:18:12.260277  281054 kubeadm.go:317] OS: Linux
	I1107 17:18:12.260352  281054 kubeadm.go:317] CGROUPS_CPU: enabled
	I1107 17:18:12.260433  281054 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I1107 17:18:12.260509  281054 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I1107 17:18:12.260580  281054 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I1107 17:18:12.260677  281054 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I1107 17:18:12.260758  281054 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I1107 17:18:12.260823  281054 kubeadm.go:317] CGROUPS_PIDS: enabled
	I1107 17:18:12.260902  281054 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I1107 17:18:12.260994  281054 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I1107 17:18:12.333983  281054 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 17:18:12.334113  281054 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 17:18:12.334239  281054 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 17:18:12.488149  281054 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 17:18:12.491233  281054 out.go:204]   - Generating certificates and keys ...
	I1107 17:18:12.491411  281054 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1107 17:18:12.491472  281054 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1107 17:18:12.701913  281054 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 17:18:12.950637  281054 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1107 17:18:13.110585  281054 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1107 17:18:13.282930  281054 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1107 17:18:13.505963  281054 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1107 17:18:13.506185  281054 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-171301 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1107 17:18:13.811624  281054 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1107 17:18:13.811775  281054 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-171301 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1107 17:18:13.981750  281054 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 17:18:14.219099  281054 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 17:18:14.407431  281054 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1107 17:18:14.407591  281054 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 17:18:14.571993  281054 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 17:18:14.862585  281054 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 17:18:15.000434  281054 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 17:18:15.150347  281054 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 17:18:15.165424  281054 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 17:18:15.168299  281054 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 17:18:15.168374  281054 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1107 17:18:15.256404  281054 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 17:18:15.258946  281054 out.go:204]   - Booting up control plane ...
	I1107 17:18:15.259086  281054 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 17:18:15.260260  281054 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 17:18:15.261455  281054 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 17:18:15.262270  281054 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 17:18:15.264205  281054 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 17:18:26.768599  281054 kubeadm.go:317] [apiclient] All control plane components are healthy after 11.504360 seconds
	I1107 17:18:26.768750  281054 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 17:18:26.786534  281054 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 17:18:27.305321  281054 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 17:18:27.305572  281054 kubeadm.go:317] [mark-control-plane] Marking the node calico-171301 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 17:18:27.814442  281054 kubeadm.go:317] [bootstrap-token] Using token: owg5vb.1q9l8iy4cnjby3we
	I1107 17:18:27.816608  281054 out.go:204]   - Configuring RBAC rules ...
	I1107 17:18:27.816734  281054 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 17:18:27.820352  281054 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 17:18:27.827326  281054 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 17:18:27.830427  281054 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 17:18:27.833374  281054 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 17:18:27.842655  281054 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 17:18:27.855112  281054 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 17:18:28.179258  281054 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I1107 17:18:28.226841  281054 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I1107 17:18:28.228264  281054 kubeadm.go:317] 
	I1107 17:18:28.228349  281054 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I1107 17:18:28.228364  281054 kubeadm.go:317] 
	I1107 17:18:28.228453  281054 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I1107 17:18:28.228465  281054 kubeadm.go:317] 
	I1107 17:18:28.228498  281054 kubeadm.go:317]   mkdir -p $HOME/.kube
	I1107 17:18:28.228569  281054 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 17:18:28.228632  281054 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 17:18:28.228639  281054 kubeadm.go:317] 
	I1107 17:18:28.228703  281054 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I1107 17:18:28.228709  281054 kubeadm.go:317] 
	I1107 17:18:28.228766  281054 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 17:18:28.228772  281054 kubeadm.go:317] 
	I1107 17:18:28.228833  281054 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I1107 17:18:28.228928  281054 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 17:18:28.229013  281054 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 17:18:28.229019  281054 kubeadm.go:317] 
	I1107 17:18:28.229127  281054 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 17:18:28.229224  281054 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I1107 17:18:28.229264  281054 kubeadm.go:317] 
	I1107 17:18:28.229372  281054 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token owg5vb.1q9l8iy4cnjby3we \
	I1107 17:18:28.229496  281054 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:bc13d7f97d1880668c06891bac287920e0f16017877774af02dac7aaa8d64b21 \
	I1107 17:18:28.229524  281054 kubeadm.go:317] 	--control-plane 
	I1107 17:18:28.229530  281054 kubeadm.go:317] 
	I1107 17:18:28.229635  281054 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I1107 17:18:28.229641  281054 kubeadm.go:317] 
	I1107 17:18:28.229739  281054 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token owg5vb.1q9l8iy4cnjby3we \
	I1107 17:18:28.229859  281054 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:bc13d7f97d1880668c06891bac287920e0f16017877774af02dac7aaa8d64b21 
	I1107 17:18:28.234989  281054 kubeadm.go:317] W1107 17:18:12.213133    1194 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1107 17:18:28.235286  281054 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1021-gcp\n", err: exit status 1
	I1107 17:18:28.235427  281054 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 17:18:28.235452  281054 cni.go:95] Creating CNI manager for "calico"
	I1107 17:18:28.239073  281054 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1107 17:18:28.240977  281054 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1107 17:18:28.241020  281054 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
	I1107 17:18:28.262837  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 17:18:29.994952  281054 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.732071841s)
	I1107 17:18:29.995005  281054 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 17:18:29.995091  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262 minikube.k8s.io/name=calico-171301 minikube.k8s.io/updated_at=2022_11_07T17_18_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:29.995096  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:30.131459  281054 ops.go:34] apiserver oom_adj: -16
	I1107 17:18:30.131573  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:30.739675  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:31.239296  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:31.739716  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:32.240054  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:32.739258  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:33.239549  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:33.739308  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:34.239937  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:34.739634  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:35.239973  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:35.739949  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:36.240016  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:36.739749  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:37.239486  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:37.739968  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:38.239248  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:38.739943  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:39.239188  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:39.739204  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:40.239873  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:40.739979  281054 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 17:18:40.844113  281054 kubeadm.go:1067] duration metric: took 10.849066296s to wait for elevateKubeSystemPrivileges.
	I1107 17:18:40.844147  281054 kubeadm.go:398] StartCluster complete in 28.712984118s
	I1107 17:18:40.844168  281054 settings.go:142] acquiring lock: {Name:mke91789b0d6e4070893f671805542745cc27d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:18:40.844281  281054 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15310-3679/kubeconfig
	I1107 17:18:40.846196  281054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/kubeconfig: {Name:mk0b702cd34f333a37178f1520735cf3ce85aad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 17:18:41.371858  281054 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-171301" rescaled to 1
	I1107 17:18:41.371928  281054 start.go:212] Will wait 5m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 17:18:41.373829  281054 out.go:177] * Verifying Kubernetes components...
	I1107 17:18:41.372075  281054 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 17:18:41.372341  281054 config.go:180] Loaded profile config "calico-171301": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:18:41.372362  281054 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I1107 17:18:41.376035  281054 addons.go:65] Setting storage-provisioner=true in profile "calico-171301"
	I1107 17:18:41.376067  281054 addons.go:227] Setting addon storage-provisioner=true in "calico-171301"
	W1107 17:18:41.376085  281054 addons.go:236] addon storage-provisioner should already be in state true
	I1107 17:18:41.376133  281054 host.go:66] Checking if "calico-171301" exists ...
	I1107 17:18:41.376662  281054 cli_runner.go:164] Run: docker container inspect calico-171301 --format={{.State.Status}}
	I1107 17:18:41.376957  281054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:18:41.377041  281054 addons.go:65] Setting default-storageclass=true in profile "calico-171301"
	I1107 17:18:41.377058  281054 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-171301"
	I1107 17:18:41.377325  281054 cli_runner.go:164] Run: docker container inspect calico-171301 --format={{.State.Status}}
	I1107 17:18:41.440511  281054 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 17:18:41.444881  281054 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 17:18:41.444909  281054 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 17:18:41.444975  281054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171301
	I1107 17:18:41.456686  281054 addons.go:227] Setting addon default-storageclass=true in "calico-171301"
	W1107 17:18:41.456718  281054 addons.go:236] addon default-storageclass should already be in state true
	I1107 17:18:41.456752  281054 host.go:66] Checking if "calico-171301" exists ...
	I1107 17:18:41.457218  281054 cli_runner.go:164] Run: docker container inspect calico-171301 --format={{.State.Status}}
	I1107 17:18:41.499838  281054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/calico-171301/id_rsa Username:docker}
	I1107 17:18:41.503113  281054 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 17:18:41.503141  281054 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 17:18:41.503198  281054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-171301
	I1107 17:18:41.560582  281054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49389 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/calico-171301/id_rsa Username:docker}
	I1107 17:18:41.601094  281054 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 17:18:41.602381  281054 node_ready.go:35] waiting up to 5m0s for node "calico-171301" to be "Ready" ...
	I1107 17:18:41.624397  281054 node_ready.go:49] node "calico-171301" has status "Ready":"True"
	I1107 17:18:41.624433  281054 node_ready.go:38] duration metric: took 22.017269ms waiting for node "calico-171301" to be "Ready" ...
	I1107 17:18:41.624447  281054 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:18:41.635116  281054 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace to be "Ready" ...
	I1107 17:18:41.725664  281054 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 17:18:41.842129  281054 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 17:18:43.734164  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:18:44.347385  281054 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.74624783s)
	I1107 17:18:44.347575  281054 start.go:826] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I1107 17:18:44.377098  281054 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.534871418s)
	I1107 17:18:44.377148  281054 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.651459684s)
	I1107 17:18:44.379294  281054 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1107 17:18:44.382219  281054 addons.go:488] enableAddons completed in 3.009842372s
	I1107 17:18:46.148324  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:18:48.648874  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:18:50.649548  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:18:53.148531  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:18:55.149678  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:18:57.647975  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:18:59.649090  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:02.152118  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:04.648566  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:07.146937  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:09.147371  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:11.148419  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:13.649025  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:16.149117  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:18.156383  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:20.648731  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:22.648990  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:25.148022  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:27.149238  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:29.648140  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:31.648693  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:33.648937  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:35.650016  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:38.148729  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:40.154013  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:42.648727  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:45.152117  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:47.647052  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:49.649764  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:51.655581  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:54.148063  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:56.149959  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:19:58.648559  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:01.148259  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:03.648144  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:05.648411  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:07.648465  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:10.149055  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:12.647493  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:14.648865  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:17.148181  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:19.648378  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:21.648878  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:24.147993  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:26.148956  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:28.648499  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:30.648717  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:32.649772  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:35.148057  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:37.650254  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:40.147964  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:42.149295  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:44.648506  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:47.148402  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:49.647472  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:51.648464  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:54.147515  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:56.649406  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:20:58.649518  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:01.148547  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:03.151147  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:05.648202  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:07.649008  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:09.649514  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:11.649907  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:14.150065  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:16.648681  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:18.649424  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:21.152024  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:23.648490  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:26.149603  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:28.648242  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:31.148794  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:33.149158  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:35.648267  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:38.147640  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:40.148458  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:42.646763  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:44.647425  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:46.647701  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:48.648198  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:50.648439  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:52.648664  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:54.648848  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:57.147799  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:21:59.647365  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:01.648606  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:04.148234  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:06.648068  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:08.649796  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:11.149021  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:13.647745  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:15.648505  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:18.148670  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:20.149299  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:22.648076  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:24.648545  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:27.148518  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:29.648995  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:32.149401  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:34.648188  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:37.147783  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:39.148737  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:41.648362  281054 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:41.654448  281054 pod_ready.go:81] duration metric: took 4m0.019235022s waiting for pod "calico-kube-controllers-7df895d496-56qcm" in "kube-system" namespace to be "Ready" ...
	E1107 17:22:41.654479  281054 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1107 17:22:41.654492  281054 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-pcxmc" in "kube-system" namespace to be "Ready" ...
	I1107 17:22:43.667574  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:45.669432  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:48.167350  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:50.167855  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:52.668146  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:54.668881  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:57.167843  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:22:59.221896  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:01.668233  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:04.167890  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:06.168884  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:08.668496  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:10.668829  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:13.168454  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:15.668240  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:18.168059  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:20.168120  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:22.168380  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:24.668184  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:26.671968  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:29.168264  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:31.168602  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:33.168920  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:35.668249  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:38.168742  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:40.168915  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:42.667654  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:45.169293  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:47.666925  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:49.668100  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:51.676232  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:54.167271  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:56.167859  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:23:58.227106  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:00.669169  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:03.168346  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:05.169957  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:07.668032  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:09.669600  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:12.168521  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:14.168940  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:16.169248  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:18.172354  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:20.172622  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:22.669328  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:25.167963  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:27.668779  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:29.668846  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:32.167835  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:34.168312  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:36.668111  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:39.168149  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:41.670373  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:44.167998  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:46.668464  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:49.169223  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:51.671252  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:54.167049  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:56.167882  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:24:58.168539  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:00.668507  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:03.168461  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:05.169128  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:07.668360  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:09.668608  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:12.169224  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:14.667015  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:16.668578  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:19.168663  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:21.669132  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:24.168721  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:26.169827  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:28.668111  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:30.668412  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:33.168350  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:35.170367  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:37.668273  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:39.668621  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:42.173021  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:44.668899  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:47.168055  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:49.667841  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:51.668320  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:54.168178  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:56.668579  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:25:59.169194  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:01.667838  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:03.668583  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:06.167493  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:08.168846  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:10.169461  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:12.667401  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:14.667597  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:16.668359  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:19.167554  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:21.169368  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:23.668873  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:26.168919  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:28.670502  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:31.169110  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:33.667627  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:35.668171  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:37.668806  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:40.169446  281054 pod_ready.go:102] pod "calico-node-pcxmc" in "kube-system" namespace has status "Ready":"False"
	I1107 17:26:41.725062  281054 pod_ready.go:81] duration metric: took 4m0.070554882s waiting for pod "calico-node-pcxmc" in "kube-system" namespace to be "Ready" ...
	E1107 17:26:41.725093  281054 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1107 17:26:41.725111  281054 pod_ready.go:38] duration metric: took 8m0.100648017s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 17:26:41.727911  281054 out.go:177] 
	W1107 17:26:41.731443  281054 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W1107 17:26:41.731473  281054 out.go:239] * 
	W1107 17:26:41.732772  281054 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 17:26:41.736451  281054 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (522.46s)
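
The failure above is a GUEST_START timeout: the cluster itself came up, but minikube's extra readiness wait saw calico-kube-controllers-7df895d496-56qcm and then calico-node-pcxmc polled as not "Ready" for roughly four minutes each before the wrapper gave up with exit status 80. A minimal triage sketch for a failure like this, assuming the calico-171301 profile and the pod names from the log are still present (standard kubectl commands, not output from this run):

    # Hypothetical follow-up; profile, namespace and pod names are taken
    # from the log above and may no longer exist on a fresh run.
    kubectl --context calico-171301 -n kube-system get pods -o wide
    kubectl --context calico-171301 -n kube-system describe pod calico-node-pcxmc
    kubectl --context calico-171301 -n kube-system logs calico-node-pcxmc --tail=50

describe surfaces image-pull and readiness-probe events, which is usually enough to distinguish an image that failed to pull from a readiness probe that kept failing.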

TestNetworkPlugins/group/false/DNS (280.42s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.183046625s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
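
Every retry below fails the same way: nslookup inside the netcat pod reports ";; connection timed out; no servers could be reached", i.e. the query never reached any DNS server at all. A minimal manual check, assuming the false-171300 context and the test's netcat deployment are still available (standard kubectl invocations, not output from this run):

    # Hypothetical follow-up; context and deployment names are taken from
    # the log above. Checks CoreDNS health and the pod's resolver config.
    kubectl --context false-171300 -n kube-system get pods -l k8s-app=kube-dns
    kubectl --context false-171300 -n kube-system get svc kube-dns
    kubectl --context false-171300 exec deployment/netcat -- cat /etc/resolv.conf

A timeout (as opposed to NXDOMAIN) points at pod-to-service connectivity rather than CoreDNS configuration, which is consistent with a network-plugin variant that installs no CNI.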
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142832714s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.200207218s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.171077992s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:20:11.088071   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155383844s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.175674458s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147379515s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.177028143s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.162592723s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:22:21.383984   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:22:21.389268   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:22:21.399583   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:22:21.419938   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:22:21.460228   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:22:21.540532   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:22:21.701184   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:22:22.021849   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:22:22.662374   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:22:23.854388   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:22:23.943556   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:22:26.504534   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.163032276s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.181084031s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/false/DNS (280.42s)
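
Note on the failure pattern: every attempt above times out rather than returning a wrong answer, and each attempt takes a near-constant ~15.1s, consistent with nslookup's common default of three 5-second tries before giving up. Per the got/want assertion above, the test passes only when the lookup output contains the kubernetes Service ClusterIP (10.96.0.1). A minimal Go sketch of the probe this log reflects (not the actual net_test.go code; the context name is taken from this run, and the deadline and poll interval are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Re-run the same in-pod lookup the test uses until it succeeds or we give up.
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "false-171300",
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		if err == nil && strings.Contains(string(out), "10.96.0.1") {
			fmt.Printf("DNS OK:\n%s", out)
			return
		}
		fmt.Printf("attempt failed (err=%v), retrying...\n", err)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("DNS never resolved before the deadline")
}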
TestNetworkPlugins/group/bridge/DNS (372.39s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.161719292s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.210569326s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150599033s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.170851945s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.168228218s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:21:56.170078   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.167851442s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134724211s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1107 17:22:31.625528   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:22:31.704885   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:22:41.664827   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:22:41.670121   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:22:41.680430   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:22:41.700764   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:22:41.741068   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:22:41.821412   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:22:41.866599   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:22:41.981954   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:22:42.302515   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:22:42.943444   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:22:44.223957   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:22:46.784688   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:22:48.389799   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137271391s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1107 17:22:51.905190   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:23:02.146144   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:23:02.347543   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141233597s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1107 17:23:22.627181   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.158879773s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1107 17:23:55.741782   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:24:03.588035   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:24:05.982879   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:24:11.029715   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:24:11.035008   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:24:11.045281   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:24:11.065601   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:24:11.105980   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:24:11.187123   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:24:11.347534   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:24:11.668473   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:24:12.308937   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:24:13.589227   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:24:16.150293   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14327481s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1107 17:24:31.511123   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:25:32.953159   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:25:34.754104   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.185965036s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.176497993s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (372.39s)
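
"connection timed out; no servers could be reached" means the pod got no reply at all from its configured nameserver; it is a reachability failure, not a bad DNS answer. A useful next step is to query the cluster DNS address directly, bypassing /etc/resolv.conf, to separate "CoreDNS is not answering" from "this pod cannot reach the DNS ClusterIP". A hedged Go sketch of that check; 10.96.0.10 is the usual minikube kube-dns ClusterIP and is an assumption here, and the probe only means something when run from inside the pod network:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolver pinned to the (assumed) kube-dns ClusterIP instead of /etc/resolv.conf.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	// A timeout here reproduces the log's symptom; a successful answer would
	// point at resolv.conf or search-path handling in the pod instead.
	fmt.Println("addrs:", addrs, "err:", err)
}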
TestNetworkPlugins/group/kubenet/DNS (360.27s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155032494s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:24:51.991978   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143495034s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:25:05.228464   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:25:07.423374   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:25:11.088246   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159300975s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:25:25.508462   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.166794839s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.171862519s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.153125602s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:26:29.344151   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159026725s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:26:54.873959   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:26:56.169449   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16272444s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.161572782s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1107 17:27:48.389989   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 17:27:49.068628   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136879974s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135311681s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1107 17:29:11.030143   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:29:13.185246   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:30:17.106920   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:30:18.387324   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142557935s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (360.27s)
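
The same symptom again, with every attempt pinned near 15.1s. To tell "UDP packets to port 53 are being dropped" apart from a resolver-library or search-path issue, one can send a single hand-built DNS query and watch for any reply at all; a read timeout reproduces the "no servers could be reached" condition at the packet level. A sketch, again assuming the usual 10.96.0.10 kube-dns address and a vantage point inside the pod network:

package main

import (
	"encoding/binary"
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	// Hand-build one DNS A query for the in-cluster API server name.
	var q []byte
	q = binary.BigEndian.AppendUint16(q, 0x1234) // transaction ID
	q = binary.BigEndian.AppendUint16(q, 0x0100) // flags: recursion desired
	q = binary.BigEndian.AppendUint16(q, 1)      // QDCOUNT = 1 question
	q = append(q, 0, 0, 0, 0, 0, 0)              // ANCOUNT/NSCOUNT/ARCOUNT = 0
	for _, label := range strings.Split("kubernetes.default.svc.cluster.local", ".") {
		q = append(q, byte(len(label)))
		q = append(q, label...)
	}
	q = append(q, 0)                        // root label terminator
	q = binary.BigEndian.AppendUint16(q, 1) // QTYPE = A
	q = binary.BigEndian.AppendUint16(q, 1) // QCLASS = IN

	conn, err := net.DialTimeout("udp", "10.96.0.10:53", 3*time.Second) // assumed kube-dns address
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(5 * time.Second))
	conn.Write(q)
	buf := make([]byte, 512)
	n, err := conn.Read(buf)
	fmt.Println("reply bytes:", n, "err:", err) // a read timeout matches the failures above
}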
TestNetworkPlugins/group/enable-default-cni/DNS (334.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150817077s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14853875s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.158058388s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.177771707s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.176596068s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1107 17:27:21.384220   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:27:31.704432   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126173828s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1107 17:27:41.664894   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146636419s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1107 17:28:09.349568   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13874213s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136134243s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:29:38.714522   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138473366s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default
E1107 17:30:11.088030   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
E1107 17:30:15.827271   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:30:15.832551   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:30:15.842821   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:30:15.863101   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:30:15.903423   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:30:15.984579   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:30:16.145076   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:30:16.465903   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144066814s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
E1107 17:30:20.948439   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:30:26.068806   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146675264s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr **
	command terminated with exit code 1
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (334.24s)
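The assertion wants the nslookup answer to contain "10.96.0.1", the conventional ClusterIP of the kubernetes.default service in the 10.96.0.0/12 service CIDR this job configures; instead the pod's resolver timed out. A minimal sketch for reproducing and narrowing the failure by hand, reusing the context from the log (the kube-dns label selector and the resolv.conf check are conventional debugging steps, not part of the test itself):

$ kubectl --context enable-default-cni-171300 exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context enable-default-cni-171300 -n kube-system get pods -l k8s-app=kube-dns    # CoreDNS pods usually carry this label
$ kubectl --context enable-default-cni-171300 exec deployment/netcat -- cat /etc/resolv.conf # nameserver should point at the cluster DNS service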
E1107 17:37:21.384367   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:37:31.704884   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 17:37:41.665038   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:37:48.389831   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 17:38:26.070361   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
E1107 17:38:37.662684   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:38:37.668086   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:38:37.678369   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:38:37.698644   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:38:37.738991   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:38:37.819280   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:38:37.979706   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:38:38.300315   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:38:38.941466   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:38:40.221726   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:38:42.782849   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:38:44.429788   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:38:45.500559   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:38:47.903245   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:38:58.143575   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:39:04.710648   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:39:11.029978   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:39:18.623720   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:39:21.607508   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:39:49.291919   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:39:59.583991   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory
E1107 17:40:08.546049   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:40:11.088132   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
E1107 17:40:15.828012   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:40:34.075411   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:40:42.226146   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
E1107 17:41:09.911140   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
E1107 17:41:21.504972   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/old-k8s-version-172642/client.crt: no such file or directory

Test pass (252/277)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 5.5
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.25.3/json-events 4.09
11 TestDownloadOnly/v1.25.3/preload-exists 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.27
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.19
18 TestDownloadOnlyKic 2.81
19 TestBinaryMirror 0.88
20 TestOffline 58.93
22 TestAddons/Setup 111.53
24 TestAddons/parallel/Registry 15.65
25 TestAddons/parallel/Ingress 27.56
26 TestAddons/parallel/MetricsServer 5.93
27 TestAddons/parallel/HelmTiller 18.44
29 TestAddons/parallel/CSI 48.32
30 TestAddons/parallel/Headlamp 9.57
31 TestAddons/parallel/CloudSpanner 5.35
33 TestAddons/serial/GCPAuth 40.67
34 TestAddons/StoppedEnableDisable 11.12
35 TestCertOptions 32.81
36 TestCertExpiration 248.49
37 TestDockerFlags 42.2
38 TestForceSystemdFlag 40.84
39 TestForceSystemdEnv 32.65
40 TestKVMDriverInstallOrUpdate 2.06
44 TestErrorSpam/setup 30.19
45 TestErrorSpam/start 0.98
46 TestErrorSpam/status 1.13
47 TestErrorSpam/pause 1.42
48 TestErrorSpam/unpause 1.43
49 TestErrorSpam/stop 11.08
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 40.67
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 55.94
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.08
60 TestFunctional/serial/CacheCmd/cache/add_remote 2.83
61 TestFunctional/serial/CacheCmd/cache/add_local 0.8
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
63 TestFunctional/serial/CacheCmd/cache/list 0.07
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
65 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
66 TestFunctional/serial/CacheCmd/cache/delete 0.14
67 TestFunctional/serial/MinikubeKubectlCmd 0.13
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
69 TestFunctional/serial/ExtraConfig 53.99
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1.14
72 TestFunctional/serial/LogsFileCmd 1.17
74 TestFunctional/parallel/ConfigCmd 0.59
75 TestFunctional/parallel/DashboardCmd 13.6
76 TestFunctional/parallel/DryRun 0.65
77 TestFunctional/parallel/InternationalLanguage 0.26
78 TestFunctional/parallel/StatusCmd 1.24
81 TestFunctional/parallel/ServiceCmd 12.22
82 TestFunctional/parallel/ServiceCmdConnect 10.67
83 TestFunctional/parallel/AddonsCmd 0.22
84 TestFunctional/parallel/PersistentVolumeClaim 30.29
86 TestFunctional/parallel/SSHCmd 0.86
87 TestFunctional/parallel/CpCmd 1.63
88 TestFunctional/parallel/MySQL 23.17
89 TestFunctional/parallel/FileSync 0.37
90 TestFunctional/parallel/CertSync 2.28
94 TestFunctional/parallel/NodeLabels 0.06
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
98 TestFunctional/parallel/License 0.17
100 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
102 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.26
103 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
104 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
108 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
110 TestFunctional/parallel/ProfileCmd/profile_list 0.45
111 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
112 TestFunctional/parallel/MountCmd/any-port 9.78
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.44
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.44
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.44
117 TestFunctional/parallel/ImageCommands/ImageBuild 4.31
118 TestFunctional/parallel/ImageCommands/Setup 0.96
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.36
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.17
121 TestFunctional/parallel/MountCmd/specific-port 2.39
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.98
123 TestFunctional/parallel/DockerEnv/bash 1.41
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.28
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
127 TestFunctional/parallel/Version/short 0.09
128 TestFunctional/parallel/Version/components 0.95
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.05
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.6
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 4.24
133 TestFunctional/delete_addon-resizer_images 0.09
134 TestFunctional/delete_my-image_image 0.02
135 TestFunctional/delete_minikube_cached_images 0.02
138 TestIngressAddonLegacy/StartLegacyK8sCluster 77.85
140 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.13
141 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.42
142 TestIngressAddonLegacy/serial/ValidateIngressAddons 36.88
145 TestJSONOutput/start/Command 54.17
146 TestJSONOutput/start/Audit 0
148 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
149 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
151 TestJSONOutput/pause/Command 0.56
152 TestJSONOutput/pause/Audit 0
154 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/unpause/Command 0.54
158 TestJSONOutput/unpause/Audit 0
160 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/stop/Command 11.03
164 TestJSONOutput/stop/Audit 0
166 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
168 TestErrorJSONOutput 0.29
170 TestKicCustomNetwork/create_custom_network 27.85
171 TestKicCustomNetwork/use_default_bridge_network 26.53
172 TestKicExistingNetwork 27.65
173 TestKicCustomSubnet 29.04
174 TestMainNoArgs 0.07
175 TestMinikubeProfile 57.35
178 TestMountStart/serial/StartWithMountFirst 5.64
179 TestMountStart/serial/VerifyMountFirst 0.34
180 TestMountStart/serial/StartWithMountSecond 5.61
181 TestMountStart/serial/VerifyMountSecond 0.33
182 TestMountStart/serial/DeleteFirst 1.57
183 TestMountStart/serial/VerifyMountPostDelete 0.33
184 TestMountStart/serial/Stop 1.25
185 TestMountStart/serial/RestartStopped 6.83
186 TestMountStart/serial/VerifyMountPostStop 0.32
189 TestMultiNode/serial/FreshStart2Nodes 95.36
190 TestMultiNode/serial/DeployApp2Nodes 3.85
191 TestMultiNode/serial/PingHostFrom2Pods 0.96
192 TestMultiNode/serial/AddNode 34.45
193 TestMultiNode/serial/ProfileList 0.37
194 TestMultiNode/serial/CopyFile 11.77
195 TestMultiNode/serial/StopNode 2.39
196 TestMultiNode/serial/StartAfterStop 20.99
197 TestMultiNode/serial/RestartKeepsNodes 127.34
198 TestMultiNode/serial/DeleteNode 4.97
199 TestMultiNode/serial/StopMultiNode 21.74
200 TestMultiNode/serial/RestartMultiNode 79.05
201 TestMultiNode/serial/ValidateNameConflict 28.74
206 TestPreload 124.57
208 TestScheduledStopUnix 100.87
209 TestSkaffold 55.76
211 TestInsufficientStorage 11.24
212 TestRunningBinaryUpgrade 71.5
214 TestKubernetesUpgrade 382.04
215 TestMissingContainerUpgrade 99.07
217 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
218 TestNoKubernetes/serial/StartWithK8s 40.53
219 TestNoKubernetes/serial/StartWithStopK8s 15.55
231 TestNoKubernetes/serial/Start 7.77
232 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
233 TestNoKubernetes/serial/ProfileList 6.59
234 TestNoKubernetes/serial/Stop 1.33
235 TestNoKubernetes/serial/StartNoArgs 8.66
236 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.48
237 TestStoppedBinaryUpgrade/Setup 0.43
238 TestStoppedBinaryUpgrade/Upgrade 81.16
239 TestStoppedBinaryUpgrade/MinikubeLogs 1.41
248 TestPause/serial/Start 94.81
249 TestNetworkPlugins/group/auto/Start 81.62
250 TestNetworkPlugins/group/kindnet/Start 53.01
252 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
253 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
254 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
255 TestNetworkPlugins/group/kindnet/DNS 0.16
256 TestNetworkPlugins/group/kindnet/Localhost 0.14
257 TestNetworkPlugins/group/kindnet/HairPin 0.14
258 TestNetworkPlugins/group/cilium/Start 91.19
259 TestNetworkPlugins/group/auto/KubeletFlags 0.4
260 TestNetworkPlugins/group/auto/NetCatPod 12.25
261 TestNetworkPlugins/group/auto/DNS 0.18
262 TestNetworkPlugins/group/auto/Localhost 0.18
263 TestNetworkPlugins/group/auto/HairPin 5.17
265 TestNetworkPlugins/group/false/Start 42.95
266 TestNetworkPlugins/group/false/KubeletFlags 0.52
267 TestNetworkPlugins/group/false/NetCatPod 10.25
269 TestNetworkPlugins/group/cilium/ControllerPod 5.02
270 TestNetworkPlugins/group/cilium/KubeletFlags 0.43
271 TestNetworkPlugins/group/cilium/NetCatPod 11.01
272 TestNetworkPlugins/group/cilium/DNS 0.17
273 TestNetworkPlugins/group/cilium/Localhost 0.16
274 TestNetworkPlugins/group/cilium/HairPin 0.18
275 TestNetworkPlugins/group/bridge/Start 44.17
276 TestNetworkPlugins/group/bridge/KubeletFlags 0.45
277 TestNetworkPlugins/group/bridge/NetCatPod 9.28
279 TestNetworkPlugins/group/enable-default-cni/Start 301.5
280 TestNetworkPlugins/group/kubenet/Start 40.21
281 TestNetworkPlugins/group/kubenet/KubeletFlags 0.44
282 TestNetworkPlugins/group/kubenet/NetCatPod 10.25
284 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
285 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
288 TestStartStop/group/old-k8s-version/serial/FirstStart 115.38
290 TestStartStop/group/no-preload/serial/FirstStart 308.91
291 TestStartStop/group/old-k8s-version/serial/DeployApp 7.36
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.7
293 TestStartStop/group/old-k8s-version/serial/Stop 10.82
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
295 TestStartStop/group/old-k8s-version/serial/SecondStart 394.95
297 TestStartStop/group/embed-certs/serial/FirstStart 301.92
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 289.98
300 TestStartStop/group/no-preload/serial/DeployApp 9.33
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.68
302 TestStartStop/group/no-preload/serial/Stop 10.79
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
304 TestStartStop/group/no-preload/serial/SecondStart 550.96
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
307 TestStartStop/group/embed-certs/serial/DeployApp 7.41
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
309 TestStartStop/group/old-k8s-version/serial/Pause 2.98
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.89
311 TestStartStop/group/embed-certs/serial/Stop 10.83
313 TestStartStop/group/newest-cni/serial/FirstStart 39.75
314 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
315 TestStartStop/group/embed-certs/serial/SecondStart 549.7
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.94
319 TestStartStop/group/newest-cni/serial/Stop 11.02
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.73
321 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.81
322 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
323 TestStartStop/group/newest-cni/serial/SecondStart 22.32
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 549.72
326 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
327 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.45
329 TestStartStop/group/newest-cni/serial/Pause 3.62
330 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
331 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
332 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.38
333 TestStartStop/group/no-preload/serial/Pause 3.18
334 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
336 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.39
337 TestStartStop/group/embed-certs/serial/Pause 3.03
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.96

TestDownloadOnly/v1.16.0/json-events (5.5s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-164526 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-164526 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.500799729s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (5.50s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
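preload-exists appears to check only that the tarball cached by the json-events subtest is present on disk. A minimal sketch of verifying that cache by hand, using the md5 from the checksum parameter of the download URL recorded in the LogsDuration output below (path derived from this job's MINIKUBE_HOME):

$ md5sum /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
# expect: 326f3ce331abb64565b50b8c9e791244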

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-164526
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-164526: exit status 85 (89.97468ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-164526 | jenkins | v1.28.0 | 07 Nov 22 16:45 UTC |          |
	|         | -p download-only-164526        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 16:45:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 16:45:26.127873   10141 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:45:26.128013   10141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:45:26.128024   10141 out.go:309] Setting ErrFile to fd 2...
	I1107 16:45:26.128028   10141 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:45:26.128133   10141 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
	W1107 16:45:26.128269   10141 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15310-3679/.minikube/config/config.json: open /home/jenkins/minikube-integration/15310-3679/.minikube/config/config.json: no such file or directory
	I1107 16:45:26.128895   10141 out.go:303] Setting JSON to true
	I1107 16:45:26.129717   10141 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1677,"bootTime":1667837849,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 16:45:26.129790   10141 start.go:126] virtualization: kvm guest
	I1107 16:45:26.133302   10141 out.go:97] [download-only-164526] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	W1107 16:45:26.133425   10141 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball: no such file or directory
	I1107 16:45:26.133458   10141 notify.go:220] Checking for updates...
	I1107 16:45:26.135434   10141 out.go:169] MINIKUBE_LOCATION=15310
	I1107 16:45:26.137516   10141 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:45:26.140030   10141 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	I1107 16:45:26.141916   10141 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	I1107 16:45:26.143784   10141 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1107 16:45:26.147055   10141 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 16:45:26.147236   10141 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 16:45:26.173571   10141 docker.go:137] docker version: linux-20.10.21
	I1107 16:45:26.173665   10141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:45:26.996235   10141 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2022-11-07 16:45:26.193476798 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:45:26.996356   10141 docker.go:254] overlay module found
	I1107 16:45:26.998662   10141 out.go:97] Using the docker driver based on user configuration
	I1107 16:45:26.998690   10141 start.go:282] selected driver: docker
	I1107 16:45:26.998705   10141 start.go:808] validating driver "docker" against <nil>
	I1107 16:45:26.998817   10141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:45:27.112277   10141 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2022-11-07 16:45:27.018198872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:45:27.112409   10141 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1107 16:45:27.112867   10141 start_flags.go:384] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I1107 16:45:27.112978   10141 start_flags.go:883] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 16:45:27.115986   10141 out.go:169] Using Docker driver with root privileges
	I1107 16:45:27.117767   10141 cni.go:95] Creating CNI manager for ""
	I1107 16:45:27.117784   10141 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1107 16:45:27.117796   10141 start_flags.go:317] config:
	{Name:download-only-164526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-164526 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 16:45:27.119575   10141 out.go:97] Starting control plane node download-only-164526 in cluster download-only-164526
	I1107 16:45:27.119612   10141 cache.go:120] Beginning downloading kic base image for docker with docker
	I1107 16:45:27.120912   10141 out.go:97] Pulling base image ...
	I1107 16:45:27.120935   10141 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 16:45:27.121036   10141 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1107 16:45:27.140800   10141 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1107 16:45:27.141126   10141 image.go:60] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1107 16:45:27.141253   10141 image.go:120] Writing gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1107 16:45:27.144068   10141 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1107 16:45:27.144090   10141 cache.go:57] Caching tarball of preloaded images
	I1107 16:45:27.144222   10141 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 16:45:27.146688   10141 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1107 16:45:27.146712   10141 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 16:45:27.176065   10141 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1107 16:45:29.817200   10141 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 16:45:29.817311   10141 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 16:45:30.574155   10141 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1107 16:45:30.574490   10141 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/download-only-164526/config.json ...
	I1107 16:45:30.574520   10141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/download-only-164526/config.json: {Name:mkf660d065270e7a704f81f67d0382d78daa92e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 16:45:30.574711   10141 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 16:45:30.574909   10141 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/15310-3679/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-164526"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
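The exit status 85 here is expected rather than a bug: with --download-only no control-plane node is ever created, so "minikube logs" has nothing to read and fails with the hint shown above; the subtest passes because it appears to assert only that the command returns promptly. Reproducing the check by hand with this job's binary:

$ out/minikube-linux-amd64 logs -p download-only-164526; echo "exit=$?"
# expect: exit=85, preceded by: The control plane node "" does not exist.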

TestDownloadOnly/v1.25.3/json-events (4.09s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-164526 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-164526 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.090758081s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (4.09s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-164526
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-164526: exit status 85 (88.47454ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-164526 | jenkins | v1.28.0 | 07 Nov 22 16:45 UTC |          |
	|         | -p download-only-164526        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-164526 | jenkins | v1.28.0 | 07 Nov 22 16:45 UTC |          |
	|         | -p download-only-164526        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/07 16:45:31
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 16:45:31.723775   10305 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:45:31.723891   10305 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:45:31.723901   10305 out.go:309] Setting ErrFile to fd 2...
	I1107 16:45:31.723906   10305 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:45:31.724027   10305 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
	W1107 16:45:31.724190   10305 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15310-3679/.minikube/config/config.json: open /home/jenkins/minikube-integration/15310-3679/.minikube/config/config.json: no such file or directory
	I1107 16:45:31.724638   10305 out.go:303] Setting JSON to true
	I1107 16:45:31.725426   10305 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1683,"bootTime":1667837849,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 16:45:31.725495   10305 start.go:126] virtualization: kvm guest
	I1107 16:45:31.728145   10305 out.go:97] [download-only-164526] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 16:45:31.728269   10305 notify.go:220] Checking for updates...
	I1107 16:45:31.729940   10305 out.go:169] MINIKUBE_LOCATION=15310
	I1107 16:45:31.731685   10305 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:45:31.733521   10305 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	I1107 16:45:31.735234   10305 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	I1107 16:45:31.736999   10305 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-164526"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.27s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.27s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.19s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-164526
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.19s)

TestDownloadOnlyKic (2.81s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-164536 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-164536 --force --alsologtostderr --driver=docker  --container-runtime=docker: (1.809450002s)
helpers_test.go:175: Cleaning up "download-docker-164536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-164536
--- PASS: TestDownloadOnlyKic (2.81s)

TestBinaryMirror (0.88s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-164539 --alsologtostderr --binary-mirror http://127.0.0.1:46427 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-164539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-164539
--- PASS: TestBinaryMirror (0.88s)

TestOffline (58.93s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-171219 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-171219 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (56.371345575s)
helpers_test.go:175: Cleaning up "offline-docker-171219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-171219
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-171219: (2.56073497s)
--- PASS: TestOffline (58.93s)

TestAddons/Setup (111.53s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-164540 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-164540 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m51.5318738s)
--- PASS: TestAddons/Setup (111.53s)

TestAddons/parallel/Registry (15.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: registry stabilized in 8.772949ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-95sj9" [e0464136-15dd-4d68-9261-9694666b374e] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010508068s

=== CONT  TestAddons/parallel/Registry
addons_test.go:288: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-lfp5j" [da11be30-39af-40af-9838-5eb93c935506] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:288: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008038033s
addons_test.go:293: (dbg) Run:  kubectl --context addons-164540 delete po -l run=registry-test --now
addons_test.go:298: (dbg) Run:  kubectl --context addons-164540 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:298: (dbg) Done: kubectl --context addons-164540 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.86233273s)
addons_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p addons-164540 ip
2022/11/07 16:47:46 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p addons-164540 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.65s)
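The registry addon is probed from two directions above: in-cluster via the busybox wget --spider run against the service DNS name, and from the host via the node IP on port 5000 (the DEBUG GET line). Both checks can be repeated by hand; the curl is illustrative, not part of the test:

$ kubectl --context addons-164540 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
$ curl -sI http://192.168.49.2:5000/    # node IP from "out/minikube-linux-amd64 -p addons-164540 ip"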

TestAddons/parallel/Ingress (27.56s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Run:  kubectl --context addons-164540 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Done: kubectl --context addons-164540 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (6.262056921s)
addons_test.go:185: (dbg) Run:  kubectl --context addons-164540 replace --force -f testdata/nginx-ingress-v1.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Non-zero exit: kubectl --context addons-164540 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (502.034619ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.103.188.74:443: connect: connection refused

** /stderr **

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-164540 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context addons-164540 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [4925480a-be95-4526-88e0-2b37d1e3f1d1] Pending

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [4925480a-be95-4526-88e0-2b37d1e3f1d1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [4925480a-be95-4526-88e0-2b37d1e3f1d1] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.009211753s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p addons-164540 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: (dbg) Run:  kubectl --context addons-164540 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p addons-164540 ip
addons_test.go:250: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p addons-164540 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p addons-164540 addons disable ingress-dns --alsologtostderr -v=1: (1.044617253s)
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-164540 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p addons-164540 addons disable ingress --alsologtostderr -v=1: (7.910380632s)
--- PASS: TestAddons/parallel/Ingress (27.56s)

TestAddons/parallel/MetricsServer (5.93s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: metrics-server stabilized in 8.750178ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-769cd898cd-wvj7z" [97041f64-f326-47d8-beb4-f45cf8009bf3] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010374336s
addons_test.go:368: (dbg) Run:  kubectl --context addons-164540 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: (dbg) Run:  out/minikube-linux-amd64 -p addons-164540 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.93s)

TestAddons/parallel/HelmTiller (18.44s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: tiller-deploy stabilized in 2.021633ms
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-75pnw" [390fa1c5-23a5-49a3-849e-04bab13b1860] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008520612s
addons_test.go:426: (dbg) Run:  kubectl --context addons-164540 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:426: (dbg) Done: kubectl --context addons-164540 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.975913264s)
addons_test.go:431: kubectl --context addons-164540 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:426: (dbg) Run:  kubectl --context addons-164540 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:426: (dbg) Done: kubectl --context addons-164540 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.801057913s)
addons_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p addons-164540 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (18.44s)

TestAddons/parallel/CSI (48.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:514: csi-hostpath-driver pods stabilized in 6.391511ms
addons_test.go:517: (dbg) Run:  kubectl --context addons-164540 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:522: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-164540 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:392: (dbg) Run:  kubectl --context addons-164540 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:527: (dbg) Run:  kubectl --context addons-164540 create -f testdata/csi-hostpath-driver/pv-pod.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:532: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [4827ea50-e750-47d7-8b55-fe8d7800174e] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [4827ea50-e750-47d7-8b55-fe8d7800174e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [4827ea50-e750-47d7-8b55-fe8d7800174e] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:532: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 22.006701434s
addons_test.go:537: (dbg) Run:  kubectl --context addons-164540 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:542: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-164540 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-164540 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:547: (dbg) Run:  kubectl --context addons-164540 delete pod task-pv-pod
addons_test.go:553: (dbg) Run:  kubectl --context addons-164540 delete pvc hpvc
addons_test.go:559: (dbg) Run:  kubectl --context addons-164540 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-164540 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-164540 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:574: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [acc5dd47-6276-4e7e-948b-f25ae15f6533] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [acc5dd47-6276-4e7e-948b-f25ae15f6533] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [acc5dd47-6276-4e7e-948b-f25ae15f6533] Running
addons_test.go:574: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.00825302s
addons_test.go:579: (dbg) Run:  kubectl --context addons-164540 delete pod task-pv-pod-restore
addons_test.go:583: (dbg) Run:  kubectl --context addons-164540 delete pvc hpvc-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-164540 delete volumesnapshot new-snapshot-demo
addons_test.go:591: (dbg) Run:  out/minikube-linux-amd64 -p addons-164540 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:591: (dbg) Done: out/minikube-linux-amd64 -p addons-164540 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.187650719s)
addons_test.go:595: (dbg) Run:  out/minikube-linux-amd64 -p addons-164540 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.32s)

TestAddons/parallel/Headlamp (9.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-164540 --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-164540 --alsologtostderr -v=1: (1.499943621s)
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-5f4cf474d8-r9rc8" [73a79ce9-9c8f-4a89-a532-9e647815e164] Pending

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-r9rc8" [73a79ce9-9c8f-4a89-a532-9e647815e164] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-r9rc8" [73a79ce9-9c8f-4a89-a532-9e647815e164] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-r9rc8" [73a79ce9-9c8f-4a89-a532-9e647815e164] Running

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 8.073898962s
--- PASS: TestAddons/parallel/Headlamp (9.57s)

TestAddons/parallel/CloudSpanner (5.35s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:342: "cloud-spanner-emulator-6c47ff8fb6-cmjmn" [b587ace2-a961-4782-9678-af30ada698c1] Running

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009010606s
addons_test.go:762: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-164540
--- PASS: TestAddons/parallel/CloudSpanner (5.35s)

TestAddons/serial/GCPAuth (40.67s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:606: (dbg) Run:  kubectl --context addons-164540 create -f testdata/busybox.yaml
addons_test.go:613: (dbg) Run:  kubectl --context addons-164540 create sa gcp-auth-test
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [44255ce4-35ca-41af-9c5f-4316b992297b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [44255ce4-35ca-41af-9c5f-4316b992297b] Running
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.006905765s
addons_test.go:625: (dbg) Run:  kubectl --context addons-164540 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:637: (dbg) Run:  kubectl --context addons-164540 describe sa gcp-auth-test
addons_test.go:675: (dbg) Run:  kubectl --context addons-164540 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:688: (dbg) Run:  out/minikube-linux-amd64 -p addons-164540 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:688: (dbg) Done: out/minikube-linux-amd64 -p addons-164540 addons disable gcp-auth --alsologtostderr -v=1: (6.122115628s)
addons_test.go:704: (dbg) Run:  out/minikube-linux-amd64 -p addons-164540 addons enable gcp-auth
addons_test.go:704: (dbg) Done: out/minikube-linux-amd64 -p addons-164540 addons enable gcp-auth: (2.145172897s)
addons_test.go:710: (dbg) Run:  kubectl --context addons-164540 apply -f testdata/private-image.yaml
addons_test.go:717: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-5c86c669bd-ngsfh" [d9af2705-42de-4115-89be-80ff5c5d52ca] Pending
helpers_test.go:342: "private-image-5c86c669bd-ngsfh" [d9af2705-42de-4115-89be-80ff5c5d52ca] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-5c86c669bd-ngsfh" [d9af2705-42de-4115-89be-80ff5c5d52ca] Running
addons_test.go:717: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 13.00676642s
addons_test.go:723: (dbg) Run:  kubectl --context addons-164540 apply -f testdata/private-image-eu.yaml
addons_test.go:728: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-64c96f687b-lnkps" [3976196e-cec6-4079-adae-6371ddcdef74] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-64c96f687b-lnkps" [3976196e-cec6-4079-adae-6371ddcdef74] Running
addons_test.go:728: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 10.00652294s
--- PASS: TestAddons/serial/GCPAuth (40.67s)

TestAddons/StoppedEnableDisable (11.12s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-164540
addons_test.go:135: (dbg) Done: out/minikube-linux-amd64 stop -p addons-164540: (10.913363978s)
addons_test.go:139: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-164540
addons_test.go:143: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-164540
--- PASS: TestAddons/StoppedEnableDisable (11.12s)

TestCertOptions (32.81s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-171318 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-171318 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (28.240550209s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-171318 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-171318 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-171318 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-171318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-171318
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-171318: (3.656032528s)
--- PASS: TestCertOptions (32.81s)

TestCertExpiration (248.49s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-171219 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-171219 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (38.339234266s)

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-171219 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-171219 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (27.797943952s)
helpers_test.go:175: Cleaning up "cert-expiration-171219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-171219
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-171219: (2.350075243s)
--- PASS: TestCertExpiration (248.49s)

TestDockerFlags (42.2s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-171335 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-171335 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (36.076835095s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-171335 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-171335 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-171335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-171335
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-171335: (5.115991027s)
--- PASS: TestDockerFlags (42.20s)

TestForceSystemdFlag (40.84s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-171219 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-171219 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (38.01449308s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-171219 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdFlag
helpers_test.go:175: Cleaning up "force-systemd-flag-171219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-171219

=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-171219: (2.335867718s)
--- PASS: TestForceSystemdFlag (40.84s)

TestForceSystemdEnv (32.65s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-171303 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-171303 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (29.981219259s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-171303 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-171303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-171303
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-171303: (2.192965534s)
--- PASS: TestForceSystemdEnv (32.65s)

TestKVMDriverInstallOrUpdate (2.06s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.06s)

TestErrorSpam/setup (30.19s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-164920 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-164920 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-164920 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-164920 --driver=docker  --container-runtime=docker: (30.190164168s)
--- PASS: TestErrorSpam/setup (30.19s)

TestErrorSpam/start (0.98s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 start --dry-run
--- PASS: TestErrorSpam/start (0.98s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.42s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 pause
--- PASS: TestErrorSpam/pause (1.42s)

TestErrorSpam/unpause (1.43s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 unpause
--- PASS: TestErrorSpam/unpause (1.43s)

TestErrorSpam/stop (11.08s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 stop: (10.827133837s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-164920 --log_dir /tmp/nospam-164920 stop
--- PASS: TestErrorSpam/stop (11.08s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/test/nested/copy/10129/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.67s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165008 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-165008 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (40.667780194s)
--- PASS: TestFunctional/serial/StartWithProxy (40.67s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (55.94s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165008 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-165008 --alsologtostderr -v=8: (55.939119301s)
functional_test.go:656: soft start took 55.939795591s for "functional-165008" cluster.
--- PASS: TestFunctional/serial/SoftStart (55.94s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-165008 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-165008 cache add k8s.gcr.io/pause:3.1: (1.059519881s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.83s)

TestFunctional/serial/CacheCmd/cache/add_local (0.8s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-165008 /tmp/TestFunctionalserialCacheCmdcacheadd_local832054646/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 cache add minikube-local-cache-test:functional-165008
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 cache delete minikube-local-cache-test:functional-165008
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-165008
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.80s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165008 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (342.473736ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 cache reload
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 kubectl -- --context functional-165008 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-165008 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (53.99s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165008 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1107 16:52:31.704458   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:52:31.711035   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:52:31.721208   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:52:31.741475   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:52:31.781741   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:52:31.862055   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:52:32.022434   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:52:32.342958   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:52:32.983843   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:52:34.264371   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:52:36.825428   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:52:41.945982   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-165008 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.989858114s)
functional_test.go:754: restart took 53.989992433s for "functional-165008" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (53.99s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-165008 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.14s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-165008 logs: (1.142567832s)
--- PASS: TestFunctional/serial/LogsCmd (1.14s)

TestFunctional/serial/LogsFileCmd (1.17s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 logs --file /tmp/TestFunctionalserialLogsFileCmd201897226/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-165008 logs --file /tmp/TestFunctionalserialLogsFileCmd201897226/001/logs.txt: (1.164966293s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.17s)

TestFunctional/parallel/ConfigCmd (0.59s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165008 config get cpus: exit status 14 (94.855289ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165008 config get cpus: exit status 14 (111.222274ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.59s)

TestFunctional/parallel/DashboardCmd (13.6s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-165008 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-165008 --alsologtostderr -v=1] ...

=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:506: unable to kill pid 54774: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.60s)

TestFunctional/parallel/DryRun (0.65s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165008 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-165008 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (291.214461ms)

-- stdout --
	* [functional-165008] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1107 16:53:01.303268   54013 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:53:01.303402   54013 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:53:01.303416   54013 out.go:309] Setting ErrFile to fd 2...
	I1107 16:53:01.303424   54013 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:53:01.303539   54013 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
	I1107 16:53:01.304130   54013 out.go:303] Setting JSON to false
	I1107 16:53:01.305550   54013 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2132,"bootTime":1667837849,"procs":708,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 16:53:01.305620   54013 start.go:126] virtualization: kvm guest
	I1107 16:53:01.310685   54013 out.go:177] * [functional-165008] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 16:53:01.312660   54013 notify.go:220] Checking for updates...
	I1107 16:53:01.314709   54013 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 16:53:01.316721   54013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:53:01.320138   54013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	I1107 16:53:01.322479   54013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	I1107 16:53:01.325318   54013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 16:53:01.329792   54013 config.go:180] Loaded profile config "functional-165008": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 16:53:01.330249   54013 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 16:53:01.370454   54013 docker.go:137] docker version: linux-20.10.21
	I1107 16:53:01.370573   54013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:53:01.496345   54013 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-07 16:53:01.399484873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:53:01.496481   54013 docker.go:254] overlay module found
	I1107 16:53:01.500340   54013 out.go:177] * Using the docker driver based on existing profile
	I1107 16:53:01.502022   54013 start.go:282] selected driver: docker
	I1107 16:53:01.502045   54013 start.go:808] validating driver "docker" against &{Name:functional-165008 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-165008 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 16:53:01.502182   54013 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 16:53:01.504720   54013 out.go:177] 
	W1107 16:53:01.506354   54013 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1107 16:53:01.507942   54013 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165008 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.65s)
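The stderr block above is the failing half of the dry-run check: a 250MiB request is rejected against the 1800MB usable minimum without touching the running cluster, and the second invocation (without --memory) validates cleanly. A minimal sketch of both halves, assuming bash; the --memory 250MB flag on the first command is inferred from the error message, the rest is verbatim from the log:

# should fail validation with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23 in this report)
out/minikube-linux-amd64 start -p functional-165008 --dry-run --memory 250MB --alsologtostderr -v=1 --driver=docker --container-runtime=docker
# should validate and exit 0 without reconfiguring the running cluster
out/minikube-linux-amd64 start -p functional-165008 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=docker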

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165008 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-165008 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (263.216512ms)

                                                
                                                
-- stdout --
	* [functional-165008] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 16:53:01.041984   53813 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:53:01.042132   53813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:53:01.042145   53813 out.go:309] Setting ErrFile to fd 2...
	I1107 16:53:01.042151   53813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:53:01.042370   53813 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
	I1107 16:53:01.043062   53813 out.go:303] Setting JSON to false
	I1107 16:53:01.044460   53813 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2132,"bootTime":1667837849,"procs":709,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 16:53:01.044538   53813 start.go:126] virtualization: kvm guest
	I1107 16:53:01.048811   53813 out.go:177] * [functional-165008] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	I1107 16:53:01.050423   53813 notify.go:220] Checking for updates...
	I1107 16:53:01.052096   53813 out.go:177]   - MINIKUBE_LOCATION=15310
	I1107 16:53:01.053678   53813 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:53:01.055221   53813 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	I1107 16:53:01.056669   53813 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	I1107 16:53:01.058248   53813 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 16:53:01.060438   53813 config.go:180] Loaded profile config "functional-165008": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 16:53:01.060915   53813 driver.go:365] Setting default libvirt URI to qemu:///system
	I1107 16:53:01.094147   53813 docker.go:137] docker version: linux-20.10.21
	I1107 16:53:01.094239   53813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:53:01.204968   53813 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-11-07 16:53:01.114600373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 16:53:01.205112   53813 docker.go:254] overlay module found
	I1107 16:53:01.207515   53813 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1107 16:53:01.210586   53813 start.go:282] selected driver: docker
	I1107 16:53:01.210613   53813 start.go:808] validating driver "docker" against &{Name:functional-165008 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-165008 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1107 16:53:01.210782   53813 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 16:53:01.213297   53813 out.go:177] 
	W1107 16:53:01.214785   53813 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1107 16:53:01.216253   53813 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
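The French stdout and "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY" stderr are the expected localized equivalents of the English dry-run failure above. A hedged reproduction sketch; the LC_ALL=fr environment variable is an assumption (the locale selection mechanism is not visible in this log), the command itself is verbatim:

# hypothetical: force a French locale, expect the localized RSRC_INSUFFICIENT_REQ_MEMORY message and exit status 23
LC_ALL=fr out/minikube-linux-amd64 start -p functional-165008 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker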

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)
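Three status forms are exercised above: the default table, a custom Go template via -f (note that "kublet:" there is only a literal label string in the template; the underlying field is {{.Kubelet}}), and JSON via -o json. A small sketch of consuming the JSON form; jq is an assumption, and the .Host field name mirrors the template fields shown in the log:

# print only the host state from the JSON status output
out/minikube-linux-amd64 -p functional-165008 status -o json | jq -r .Host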

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd (12.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-165008 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-165008 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-nzz4v" [213c0177-93bb-40fa-b781-2ee936818dca] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-nzz4v" [213c0177-93bb-40fa-b781-2ee936818dca] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 10.006665414s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 service --namespace=default --https --url hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1476: found endpoint: https://192.168.49.2:30676
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 service hello-node --url --format={{.IP}}

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 service hello-node --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1511: found endpoint for hello-node: http://192.168.49.2:30676
--- PASS: TestFunctional/parallel/ServiceCmd (12.22s)
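The full NodePort round trip above condenses to three commands, all verbatim from the log; minikube resolves the service URL to the node IP plus the allocated NodePort (30676 in this run):

kubectl --context functional-165008 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
kubectl --context functional-165008 expose deployment hello-node --type=NodePort --port=8080
out/minikube-linux-amd64 -p functional-165008 service hello-node --url   # prints http://192.168.49.2:<nodeport>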

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-165008 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-165008 expose deployment hello-node-connect --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-2p8wv" [06de0997-040b-4db2-a492-d31194afb033] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-2p8wv" [06de0997-040b-4db2-a492-d31194afb033] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.007388844s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 service hello-node-connect --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.49.2:31406
functional_test.go:1605: http://192.168.49.2:31406: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6458c8fb6f-2p8wv

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31406
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.67s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (30.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [c3f94c65-9159-410e-b84c-cebda9cddb7c] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015160479s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-165008 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-165008 apply -f testdata/storage-provisioner/pvc.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-165008 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-165008 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [b61dae28-f6db-413c-9fe4-2d7e373273ca] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [b61dae28-f6db-413c-9fe4-2d7e373273ca] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [b61dae28-f6db-413c-9fe4-2d7e373273ca] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.007902977s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-165008 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-165008 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-165008 delete -f testdata/storage-provisioner/pod.yaml: (1.160523418s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-165008 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [a71f9c26-6de4-45bb-8953-77a70d4bab61] Pending
helpers_test.go:342: "sp-pod" [a71f9c26-6de4-45bb-8953-77a70d4bab61] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [a71f9c26-6de4-45bb-8953-77a70d4bab61] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.007901915s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-165008 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.29s)
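The persistence check above boils down to: write a file through the PVC mount, delete the pod, recreate it from the same manifest, and confirm the file survived. All commands verbatim from the log:

kubectl --context functional-165008 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-165008 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-165008 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-165008 exec sp-pod -- ls /tmp/mount   # foo must still be listed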

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.86s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh -n functional-165008 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 cp functional-165008:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2678896724/001/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh -n functional-165008 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.63s)
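The cp round trip above: copy a file from the host into the node, read it back over ssh, then copy it out again. Commands verbatim from the log except the final destination path, which is illustrative (the test uses a per-run temp directory):

out/minikube-linux-amd64 -p functional-165008 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-165008 ssh -n functional-165008 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-amd64 -p functional-165008 cp functional-165008:/home/docker/cp-test.txt /tmp/cp-test.txt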

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-165008 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-596b7fcdbf-2swlh" [8df94abb-6fa0-426e-a15a-2edbfa9f4557] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-2swlh" [8df94abb-6fa0-426e-a15a-2edbfa9f4557] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-2swlh" [8df94abb-6fa0-426e-a15a-2edbfa9f4557] Running
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.044096098s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-165008 exec mysql-596b7fcdbf-2swlh -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-165008 exec mysql-596b7fcdbf-2swlh -- mysql -ppassword -e "show databases;": exit status 1 (172.978145ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-165008 exec mysql-596b7fcdbf-2swlh -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-165008 exec mysql-596b7fcdbf-2swlh -- mysql -ppassword -e "show databases;": exit status 1 (131.107254ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-165008 exec mysql-596b7fcdbf-2swlh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.17s)
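The two non-zero exits above (ERROR 1045, then ERROR 2002) are transient states while mysqld finishes initializing inside the container; the log shows the same command succeeding on the third attempt. A hedged sketch of the retry pattern the test effectively applies, assuming bash; the pod name is taken from this run:

# poll until mysqld accepts the root password; early failures are expected during startup
until kubectl --context functional-165008 exec mysql-596b7fcdbf-2swlh -- mysql -ppassword -e "show databases;"; do
  sleep 2
done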

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/10129/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "sudo cat /etc/test/nested/copy/10129/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/10129.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "sudo cat /etc/ssl/certs/10129.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/10129.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "sudo cat /usr/share/ca-certificates/10129.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/101292.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "sudo cat /etc/ssl/certs/101292.pem"
E1107 16:53:12.667949   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/101292.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "sudo cat /usr/share/ca-certificates/101292.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.28s)
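The .0 filenames above (51391683.0, 3ec20f2e.0) look like the OpenSSL subject-hash naming convention used in /etc/ssl/certs. A hedged sketch for checking the correspondence, assuming these really are subject-hash names; the input path is verbatim from the log:

# should print the hash that prefixes the matching .0 file in /etc/ssl/certs
openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/10129.pem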

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-165008 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
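The Go template above enumerates every label key on the first node. For interactive use, the same information is available through a stock kubectl flag; a one-line equivalent:

# show all node labels without a template
kubectl --context functional-165008 get nodes --show-labels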

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165008 ssh "sudo systemctl is-active crio": exit status 1 (417.572112ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
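The "Non-zero exit ... exit status 1" above is the pass condition: systemctl is-active prints the unit state and exits 3 when the unit is inactive (visible as "Process exited with status 3" in the stderr), so a stopped cri-o on a Docker-runtime node is exactly what the test expects. A one-line re-check, verbatim from the log apart from the trailing echo:

out/minikube-linux-amd64 -p functional-165008 ssh "sudo systemctl is-active crio" || echo "crio inactive, as expected"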

                                                
                                    
x
+
TestFunctional/parallel/License (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
2022/11/07 16:53:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/License (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-165008 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-165008 apply -f testdata/testsvc.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [5e3b0b41-780c-418e-9106-ef41a9a40267] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [5e3b0b41-780c-418e-9106-ef41a9a40267] Running
E1107 16:52:52.186826   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.006254682s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.26s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-165008 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.102.94.25 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-165008 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
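The tunnel lifecycle exercised across the serial steps above: start the tunnel as a background daemon, wait for the LoadBalancer service to receive an ingress IP, hit it, then stop the tunnel. A sketch assuming bash job control; the curl step is an assumption standing in for the test's internal HTTP check, everything else mirrors the logged commands:

out/minikube-linux-amd64 -p functional-165008 tunnel --alsologtostderr &   # runs until killed
TUNNEL_PID=$!
kubectl --context functional-165008 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
curl http://10.102.94.25/   # the ingress IP reported in this run
kill $TUNNEL_PID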

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "373.24023ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "72.182651ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1362: Took "355.472281ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "72.881943ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-165008 /tmp/TestFunctionalparallelMountCmdany-port322847797/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1667839979180654900" to /tmp/TestFunctionalparallelMountCmdany-port322847797/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1667839979180654900" to /tmp/TestFunctionalparallelMountCmdany-port322847797/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1667839979180654900" to /tmp/TestFunctionalparallelMountCmdany-port322847797/001/test-1667839979180654900
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165008 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (386.811296ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh -- ls -la /mount-9p

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  7 16:52 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  7 16:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  7 16:52 test-1667839979180654900
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh cat /mount-9p/test-1667839979180654900

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-165008 replace --force -f testdata/busybox-mount-test.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [e92b5899-9005-45b6-9936-fc19de247a55] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [e92b5899-9005-45b6-9936-fc19de247a55] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [e92b5899-9005-45b6-9936-fc19de247a55] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [e92b5899-9005-45b6-9936-fc19de247a55] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [e92b5899-9005-45b6-9936-fc19de247a55] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.007172542s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-165008 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-165008 /tmp/TestFunctionalparallelMountCmdany-port322847797/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.78s)
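The 9p mount flow above: start the mount as a background daemon, verify it from inside the node with findmnt (the first probe can race the mount and fail, as it did here before succeeding on retry), then force-unmount. Paths and commands verbatim from the log:

out/minikube-linux-amd64 mount -p functional-165008 /tmp/TestFunctionalparallelMountCmdany-port322847797/001:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-165008 ssh "findmnt -T /mount-9p | grep 9p"   # retry once if the mount has not settled yet
out/minikube-linux-amd64 -p functional-165008 ssh "sudo umount -f /mount-9p"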

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image ls --format short

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165008 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-165008
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-165008
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image ls --format table

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165008 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-165008 | fca7480a5ccdd | 30B    |
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| gcr.io/google-containers/addon-resizer      | functional-165008 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 76c69feac34e8 | 142MB  |
| docker.io/library/nginx                     | alpine            | b997307a58ab5 | 23.6MB |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image ls --format json
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165008 image ls --format json:
[{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"fca7480a5ccdd93a2e179c81e1b2df8c46aba5c1325ff22b4253a2a7092c5852","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-165008"],"size":"30"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d96
7ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-165008"],"size":"32900000"},{"id":"76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722
231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23600000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"
id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"}]
functional_test.go:265: (dbg) Stderr: out/minikube-linux-amd64 -p functional-165008 image ls --format json:
E1107 16:53:24.239814   59292 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 80ef5bdd-8ac0-49b1-ad11-1af0d978838a
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.44s)
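Side note: the JSON above is one flat array of image records, each with id, repoDigests, repoTags, and size (size is a quoted string, not a number). A minimal Go sketch for decoding it, assuming only the field names visible in the output; the struct and variable names here are ours, not minikube's:

package main

import (
	"encoding/json"
	"fmt"
)

// imageRecord mirrors one element of the `image ls --format json` array above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // quoted in the output, so a string here
}

func main() {
	// One record copied from the output above, shortened for illustration.
	raw := `[{"id":"da86e6ba6ca19","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"}]`
	var images []imageRecord
	if err := json.Unmarshal([]byte(raw), &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size) // [k8s.gcr.io/pause:3.1] 742000
	}
}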

x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165008 image ls --format yaml:
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-165008
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: fca7480a5ccdd93a2e179c81e1b2df8c46aba5c1325ff22b4253a2a7092c5852
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-165008
size: "30"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23600000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.44s)

x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165008 ssh pgrep buildkitd: exit status 1 (487.127156ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image build -t localhost/my-image:functional-165008 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-165008 image build -t localhost/my-image:functional-165008 testdata/build: (3.525135906s)
functional_test.go:316: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165008 image build -t localhost/my-image:functional-165008 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 4b3c5a7a0c1b
Removing intermediate container 4b3c5a7a0c1b
---> b0f9a98de124
Step 3/3 : ADD content.txt /
---> 6b3454c9980c
Successfully built 6b3454c9980c
Successfully tagged localhost/my-image:functional-165008
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.31s)
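Side note: the three build steps logged above imply a Dockerfile along these lines; this is reconstructed from the step output, and the actual contents of testdata/build may differ:

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /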

x
+
TestFunctional/parallel/ImageCommands/Setup (0.96s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-165008
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.96s)

x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image load --daemon gcr.io/google-containers/addon-resizer:functional-165008
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-165008 image load --daemon gcr.io/google-containers/addon-resizer:functional-165008: (4.122132129s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.36s)

x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image load --daemon gcr.io/google-containers/addon-resizer:functional-165008
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-165008 image load --daemon gcr.io/google-containers/addon-resizer:functional-165008: (2.864267644s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.17s)

x
+
TestFunctional/parallel/MountCmd/specific-port (2.39s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-165008 /tmp/TestFunctionalparallelMountCmdspecific-port868662946/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165008 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (472.255373ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-165008 /tmp/TestFunctionalparallelMountCmdspecific-port868662946/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165008 ssh "sudo umount -f /mount-9p": exit status 1 (481.974625ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-165008 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-165008 /tmp/TestFunctionalparallelMountCmdspecific-port868662946/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.39s)

x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.98s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-165008
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image load --daemon gcr.io/google-containers/addon-resizer:functional-165008
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-165008 image load --daemon gcr.io/google-containers/addon-resizer:functional-165008: (3.883799696s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.98s)

x
+
TestFunctional/parallel/DockerEnv/bash (1.41s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-165008 docker-env) && out/minikube-linux-amd64 status -p functional-165008"
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-165008 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.41s)

x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

x
+
TestFunctional/parallel/Version/short (0.09s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

x
+
TestFunctional/parallel/Version/components (0.95s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.95s)

x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.05s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image save gcr.io/google-containers/addon-resizer:functional-165008 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-165008 image save gcr.io/google-containers/addon-resizer:functional-165008 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.054054065s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.05s)

x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image rm gcr.io/google-containers/addon-resizer:functional-165008
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-165008 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (2.301047855s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.60s)

x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-165008
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-165008 image save --daemon gcr.io/google-containers/addon-resizer:functional-165008
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-165008 image save --daemon gcr.io/google-containers/addon-resizer:functional-165008: (4.189649041s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-165008
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.24s)

x
+
TestFunctional/delete_addon-resizer_images (0.09s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-165008
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

x
+
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-165008
--- PASS: TestFunctional/delete_my-image_image (0.02s)

x
+
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-165008
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (77.85s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-165341 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1107 16:53:53.629194   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-165341 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m17.847371126s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (77.85s)

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.13s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-165341 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-165341 addons enable ingress --alsologtostderr -v=5: (11.133852361s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.13s)

x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.42s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-165341 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.42s)

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (36.88s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:165: (dbg) Run:  kubectl --context ingress-addon-legacy-165341 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1107 16:55:15.550881   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
addons_test.go:165: (dbg) Done: kubectl --context ingress-addon-legacy-165341 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.027152043s)
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-165341 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context ingress-addon-legacy-165341 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [136ad8d8-4e4a-4d7a-a6ed-b98a5120e47d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [136ad8d8-4e4a-4d7a-a6ed-b98a5120e47d] Running
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.006072624s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-165341 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: (dbg) Run:  kubectl --context ingress-addon-legacy-165341 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-165341 ip
addons_test.go:250: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-165341 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-165341 addons disable ingress-dns --alsologtostderr -v=1: (4.276585482s)
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-165341 addons disable ingress --alsologtostderr -v=1
addons_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-165341 addons disable ingress --alsologtostderr -v=1: (7.299192034s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (36.88s)

x
+
TestJSONOutput/start/Command (54.17s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-165550 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-165550 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (54.172121128s)
--- PASS: TestJSONOutput/start/Command (54.17s)

x
+
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

x
+
TestJSONOutput/pause/Command (0.56s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-165550 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

x
+
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

x
+
TestJSONOutput/unpause/Command (0.54s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-165550 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

x
+
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

x
+
TestJSONOutput/stop/Command (11.03s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-165550 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-165550 --output=json --user=testUser: (11.025615392s)
--- PASS: TestJSONOutput/stop/Command (11.03s)

x
+
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

x
+
TestErrorJSONOutput (0.29s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-165658 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-165658 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.742062ms)

-- stdout --
	{"specversion":"1.0","id":"44114e4f-f063-4db8-b13a-7a4a1f2b1fe5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-165658] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7dc889cc-5407-40a6-9c1e-25dae1a1f52f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15310"}}
	{"specversion":"1.0","id":"f4b17bca-414f-4b26-b138-73c584705dc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a3e35d72-afa0-4baa-942a-e97c0fd9e9de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig"}}
	{"specversion":"1.0","id":"f59eca7a-adec-463b-a067-5567984110b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube"}}
	{"specversion":"1.0","id":"da67572f-39b1-4b8c-8436-5768bb83f5ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"83cc6d7b-844c-4e96-9153-b7708b559b57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-165658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-165658
--- PASS: TestErrorJSONOutput (0.29s)
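Side note: each stdout line above is a CloudEvents-style envelope, and the final io.k8s.sigs.minikube.error event carries the exit code. A minimal Go sketch for picking that event out, assuming only the fields visible in the output; the struct name is ours, not minikube's:

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors the envelope fields visible in the stdout above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"` // every data value shown above is a string
}

func main() {
	// The error event from the output above, abbreviated for illustration.
	line := `{"specversion":"1.0","id":"83cc6d7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("%s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}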

x
+
TestKicCustomNetwork/create_custom_network (27.85s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-165658 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-165658 --network=: (25.67240241s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-165658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-165658
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-165658: (2.153639333s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.85s)

x
+
TestKicCustomNetwork/use_default_bridge_network (26.53s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-165726 --network=bridge
E1107 16:57:31.704382   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:57:48.390032   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 16:57:48.396038   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 16:57:48.406325   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 16:57:48.426657   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 16:57:48.466967   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 16:57:48.547298   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 16:57:48.707700   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 16:57:49.028018   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 16:57:49.668332   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 16:57:50.948994   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-165726 --network=bridge: (24.515285962s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-165726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-165726
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-165726: (1.988983721s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.53s)

x
+
TestKicExistingNetwork (27.65s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-165753 --network=existing-network
E1107 16:57:53.509686   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 16:57:58.630526   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 16:57:59.391994   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 16:58:08.871495   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-165753 --network=existing-network: (25.490918651s)
helpers_test.go:175: Cleaning up "existing-network-165753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-165753
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-165753: (1.99279369s)
--- PASS: TestKicExistingNetwork (27.65s)

x
+
TestKicCustomSubnet (29.04s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-165820 --subnet=192.168.60.0/24
E1107 16:58:29.351927   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-165820 --subnet=192.168.60.0/24: (26.836620019s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-165820 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-165820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-165820
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-165820: (2.181916404s)
--- PASS: TestKicCustomSubnet (29.04s)
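Side note: the --format argument to `docker network inspect` above is a Go text/template. A standalone sketch of the same `index` construct, run against stand-in data shaped like docker's IPAM config; the type names are ours, and the sample subnet is the one from the test:

package main

import (
	"os"
	"text/template"
)

// Stand-in types shaped like the .IPAM.Config field docker exposes to --format.
type ipamConfig struct{ Subnet string }
type network struct {
	IPAM struct{ Config []ipamConfig }
}

func main() {
	var n network
	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.60.0/24"}}
	tmpl := template.Must(template.New("subnet").Parse(`{{(index .IPAM.Config 0).Subnet}}`))
	// Writes 192.168.60.0/24 to stdout, mirroring the inspect output above.
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}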

x
+
TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

x
+
TestMinikubeProfile (57.35s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-165850 --driver=docker  --container-runtime=docker
E1107 16:59:10.313937   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-165850 --driver=docker  --container-runtime=docker: (27.031548125s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-165850 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-165850 --driver=docker  --container-runtime=docker: (24.686566066s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-165850
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-165850
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-165850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-165850
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-165850: (2.139009414s)
helpers_test.go:175: Cleaning up "first-165850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-165850
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-165850: (2.255252171s)
--- PASS: TestMinikubeProfile (57.35s)

x
+
TestMountStart/serial/StartWithMountFirst (5.64s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-165947 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-165947 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.638390464s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.64s)

x
+
TestMountStart/serial/VerifyMountFirst (0.34s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-165947 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.34s)

x
+
TestMountStart/serial/StartWithMountSecond (5.61s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-165947 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-165947 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.608125195s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.61s)

x
+
TestMountStart/serial/VerifyMountSecond (0.33s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165947 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.33s)

TestMountStart/serial/DeleteFirst (1.57s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-165947 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-165947 --alsologtostderr -v=5: (1.568642954s)
--- PASS: TestMountStart/serial/DeleteFirst (1.57s)

TestMountStart/serial/VerifyMountPostDelete (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165947 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-165947
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-165947: (1.248248222s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (6.83s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-165947
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-165947: (5.834228022s)
--- PASS: TestMountStart/serial/RestartStopped (6.83s)
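
Stop and RestartStopped exercise the plain lifecycle commands; the restart takes no flags because minikube re-reads the saved profile config, which is what the VerifyMountPostStop check below relies on. A sketch with the same hypothetical profile:

    minikube stop -p demo-mount    # halt the node container
    minikube start -p demo-mount   # restart; earlier settings come from the profile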

TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-165947 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (95.36s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-170011 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1107 17:00:11.725733   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
E1107 17:00:12.366631   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
E1107 17:00:13.647367   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
E1107 17:00:16.208226   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
E1107 17:00:21.328655   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
E1107 17:00:31.569826   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
E1107 17:00:32.234377   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 17:00:52.049982   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
E1107 17:01:33.010545   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-170011 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m34.811839844s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.36s)
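
The start flags above are the entire recipe for a two-node cluster. A minimal sketch, again with a hypothetical profile name:

    minikube start -p demo-multi --memory=2200 --nodes=2 --wait=true \
      --driver=docker --container-runtime=docker
    minikube -p demo-multi status   # one control plane plus one worker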

TestMultiNode/serial/DeployApp2Nodes (3.85s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-170011 -- rollout status deployment/busybox: (1.983100851s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- exec busybox-65db55d5d6-9crr6 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- exec busybox-65db55d5d6-w5nmn -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- exec busybox-65db55d5d6-9crr6 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- exec busybox-65db55d5d6-w5nmn -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- exec busybox-65db55d5d6-9crr6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- exec busybox-65db55d5d6-w5nmn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.85s)
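
The subtest drives kubectl through the minikube wrapper and then checks DNS from each replica. Roughly the same flow by hand, assuming a busybox deployment manifest like the repo's testdata file and a real pod name in place of the placeholder:

    minikube kubectl -p demo-multi -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p demo-multi -- rollout status deployment/busybox
    minikube kubectl -p demo-multi -- exec <busybox-pod> -- nslookup kubernetes.default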

TestMultiNode/serial/PingHostFrom2Pods (0.96s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- exec busybox-65db55d5d6-9crr6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- exec busybox-65db55d5d6-9crr6 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- exec busybox-65db55d5d6-w5nmn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-170011 -- exec busybox-65db55d5d6-w5nmn -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)
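
The awk/cut pipeline above pulls the resolved address of host.minikube.internal out of BusyBox's nslookup output before pinging the host-side gateway. By hand, with a hypothetical pod name:

    minikube kubectl -p demo-multi -- exec <busybox-pod> -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    minikube kubectl -p demo-multi -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.58.1"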

TestMultiNode/serial/AddNode (34.45s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-170011 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-170011 -v 3 --alsologtostderr: (33.721434861s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (34.45s)
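
Growing the cluster is a single command. A sketch:

    minikube node add -p demo-multi   # joins a new worker (m03 in this run)
    minikube -p demo-multi status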

TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (11.77s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 cp testdata/cp-test.txt multinode-170011:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 cp multinode-170011:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1250069898/001/cp-test_multinode-170011.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 cp multinode-170011:/home/docker/cp-test.txt multinode-170011-m02:/home/docker/cp-test_multinode-170011_multinode-170011-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011-m02 "sudo cat /home/docker/cp-test_multinode-170011_multinode-170011-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 cp multinode-170011:/home/docker/cp-test.txt multinode-170011-m03:/home/docker/cp-test_multinode-170011_multinode-170011-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011-m03 "sudo cat /home/docker/cp-test_multinode-170011_multinode-170011-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 cp testdata/cp-test.txt multinode-170011-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 cp multinode-170011-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1250069898/001/cp-test_multinode-170011-m02.txt
E1107 17:02:31.704454   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 cp multinode-170011-m02:/home/docker/cp-test.txt multinode-170011:/home/docker/cp-test_multinode-170011-m02_multinode-170011.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011 "sudo cat /home/docker/cp-test_multinode-170011-m02_multinode-170011.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 cp multinode-170011-m02:/home/docker/cp-test.txt multinode-170011-m03:/home/docker/cp-test_multinode-170011-m02_multinode-170011-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011-m03 "sudo cat /home/docker/cp-test_multinode-170011-m02_multinode-170011-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 cp testdata/cp-test.txt multinode-170011-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 cp multinode-170011-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1250069898/001/cp-test_multinode-170011-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 cp multinode-170011-m03:/home/docker/cp-test.txt multinode-170011:/home/docker/cp-test_multinode-170011-m03_multinode-170011.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011 "sudo cat /home/docker/cp-test_multinode-170011-m03_multinode-170011.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 cp multinode-170011-m03:/home/docker/cp-test.txt multinode-170011-m02:/home/docker/cp-test_multinode-170011-m03_multinode-170011-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 ssh -n multinode-170011-m02 "sudo cat /home/docker/cp-test_multinode-170011-m03_multinode-170011-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.77s)
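
The block above walks every host/node pairing through minikube cp, which accepts node:path on either side. A condensed sketch:

    minikube -p demo-multi cp ./cp-test.txt demo-multi:/home/docker/cp-test.txt
    minikube -p demo-multi ssh -n demo-multi "sudo cat /home/docker/cp-test.txt"
    # Node-to-node copies use the same syntax:
    minikube -p demo-multi cp demo-multi:/home/docker/cp-test.txt \
      demo-multi-m02:/home/docker/cp-test.txt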

TestMultiNode/serial/StopNode (2.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-170011 node stop m03: (1.26127959s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-170011 status: exit status 7 (562.32202ms)
-- stdout --
	multinode-170011
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-170011-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-170011-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-170011 status --alsologtostderr: exit status 7 (561.127986ms)
-- stdout --
	multinode-170011
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-170011-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-170011-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1107 17:02:40.108193  122932 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:02:40.108331  122932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:02:40.108342  122932 out.go:309] Setting ErrFile to fd 2...
	I1107 17:02:40.108347  122932 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:02:40.108457  122932 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
	I1107 17:02:40.108614  122932 out.go:303] Setting JSON to false
	I1107 17:02:40.108646  122932 mustload.go:65] Loading cluster: multinode-170011
	I1107 17:02:40.108736  122932 notify.go:220] Checking for updates...
	I1107 17:02:40.109007  122932 config.go:180] Loaded profile config "multinode-170011": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:02:40.109024  122932 status.go:255] checking status of multinode-170011 ...
	I1107 17:02:40.109461  122932 cli_runner.go:164] Run: docker container inspect multinode-170011 --format={{.State.Status}}
	I1107 17:02:40.134451  122932 status.go:330] multinode-170011 host status = "Running" (err=<nil>)
	I1107 17:02:40.134472  122932 host.go:66] Checking if "multinode-170011" exists ...
	I1107 17:02:40.134682  122932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-170011
	I1107 17:02:40.159426  122932 host.go:66] Checking if "multinode-170011" exists ...
	I1107 17:02:40.159667  122932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 17:02:40.159704  122932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-170011
	I1107 17:02:40.184178  122932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49227 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/multinode-170011/id_rsa Username:docker}
	I1107 17:02:40.267737  122932 ssh_runner.go:195] Run: systemctl --version
	I1107 17:02:40.271320  122932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:02:40.280293  122932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 17:02:40.383403  122932 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-11-07 17:02:40.300523251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1107 17:02:40.383953  122932 kubeconfig.go:92] found "multinode-170011" server: "https://192.168.58.2:8443"
	I1107 17:02:40.383979  122932 api_server.go:165] Checking apiserver status ...
	I1107 17:02:40.384007  122932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 17:02:40.393922  122932 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1841/cgroup
	I1107 17:02:40.401596  122932 api_server.go:181] apiserver freezer: "2:freezer:/docker/a714f5c143df937b7db79c1eef071696ea399cc16566a8514554cda67512f247/kubepods/burstable/podf3e17b8c12e097b21b1a8989cc25188b/6fa40ddf64caec48dbebc56a3441972c393d34dad64886919a6472d530bb60fc"
	I1107 17:02:40.401659  122932 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a714f5c143df937b7db79c1eef071696ea399cc16566a8514554cda67512f247/kubepods/burstable/podf3e17b8c12e097b21b1a8989cc25188b/6fa40ddf64caec48dbebc56a3441972c393d34dad64886919a6472d530bb60fc/freezer.state
	I1107 17:02:40.408425  122932 api_server.go:203] freezer state: "THAWED"
	I1107 17:02:40.408457  122932 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1107 17:02:40.412810  122932 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1107 17:02:40.412838  122932 status.go:421] multinode-170011 apiserver status = Running (err=<nil>)
	I1107 17:02:40.412850  122932 status.go:257] multinode-170011 status: &{Name:multinode-170011 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 17:02:40.412866  122932 status.go:255] checking status of multinode-170011-m02 ...
	I1107 17:02:40.413076  122932 cli_runner.go:164] Run: docker container inspect multinode-170011-m02 --format={{.State.Status}}
	I1107 17:02:40.435972  122932 status.go:330] multinode-170011-m02 host status = "Running" (err=<nil>)
	I1107 17:02:40.436003  122932 host.go:66] Checking if "multinode-170011-m02" exists ...
	I1107 17:02:40.436220  122932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-170011-m02
	I1107 17:02:40.461325  122932 host.go:66] Checking if "multinode-170011-m02" exists ...
	I1107 17:02:40.461605  122932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 17:02:40.461685  122932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-170011-m02
	I1107 17:02:40.484729  122932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49232 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/multinode-170011-m02/id_rsa Username:docker}
	I1107 17:02:40.567230  122932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 17:02:40.576768  122932 status.go:257] multinode-170011-m02 status: &{Name:multinode-170011-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1107 17:02:40.576814  122932 status.go:255] checking status of multinode-170011-m03 ...
	I1107 17:02:40.577072  122932 cli_runner.go:164] Run: docker container inspect multinode-170011-m03 --format={{.State.Status}}
	I1107 17:02:40.602062  122932 status.go:330] multinode-170011-m03 host status = "Stopped" (err=<nil>)
	I1107 17:02:40.602091  122932 status.go:343] host is not running, skipping remaining checks
	I1107 17:02:40.602100  122932 status.go:257] multinode-170011-m03 status: &{Name:multinode-170011-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
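
Note the exit code in the runs above: status deliberately returns 7 while any node is down, so callers can tell a degraded cluster from a healthy one. A sketch:

    minikube -p demo-multi node stop m03
    minikube -p demo-multi status   # exit status 7 until m03 is back up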

TestMultiNode/serial/StartAfterStop (20.99s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 node start m03 --alsologtostderr
E1107 17:02:48.390068   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 17:02:54.930910   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-170011 node start m03 --alsologtostderr: (20.177868197s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (20.99s)

TestMultiNode/serial/RestartKeepsNodes (127.34s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-170011
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-170011
E1107 17:03:16.075278   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-170011: (22.659460242s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-170011 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-170011 --wait=true -v=8 --alsologtostderr: (1m44.542695995s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-170011
--- PASS: TestMultiNode/serial/RestartKeepsNodes (127.34s)
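
The invariant being checked is that a full stop/start cycle preserves the node set. A sketch:

    minikube node list -p demo-multi           # record the node set
    minikube stop -p demo-multi
    minikube start -p demo-multi --wait=true
    minikube node list -p demo-multi           # the same nodes should come back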

TestMultiNode/serial/DeleteNode (4.97s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 node delete m03
E1107 17:05:11.088268   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-170011 node delete m03: (4.28880851s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.97s)
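
Deleting a node removes it from the Kubernetes node list as well, which the kubectl checks above confirm. A sketch:

    minikube -p demo-multi node delete m03
    minikube kubectl -p demo-multi -- get nodes   # m03 should be gone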

TestMultiNode/serial/StopMultiNode (21.74s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-170011 stop: (21.501414787s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-170011 status: exit status 7 (113.932198ms)
-- stdout --
	multinode-170011
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-170011-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-170011 status --alsologtostderr: exit status 7 (120.052638ms)
-- stdout --
	multinode-170011
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-170011-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1107 17:05:35.586332  140177 out.go:296] Setting OutFile to fd 1 ...
	I1107 17:05:35.586461  140177 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:05:35.586474  140177 out.go:309] Setting ErrFile to fd 2...
	I1107 17:05:35.586482  140177 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 17:05:35.586611  140177 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
	I1107 17:05:35.586816  140177 out.go:303] Setting JSON to false
	I1107 17:05:35.586851  140177 mustload.go:65] Loading cluster: multinode-170011
	I1107 17:05:35.586944  140177 notify.go:220] Checking for updates...
	I1107 17:05:35.587232  140177 config.go:180] Loaded profile config "multinode-170011": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1107 17:05:35.587248  140177 status.go:255] checking status of multinode-170011 ...
	I1107 17:05:35.587608  140177 cli_runner.go:164] Run: docker container inspect multinode-170011 --format={{.State.Status}}
	I1107 17:05:35.611511  140177 status.go:330] multinode-170011 host status = "Stopped" (err=<nil>)
	I1107 17:05:35.611537  140177 status.go:343] host is not running, skipping remaining checks
	I1107 17:05:35.611546  140177 status.go:257] multinode-170011 status: &{Name:multinode-170011 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 17:05:35.611585  140177 status.go:255] checking status of multinode-170011-m02 ...
	I1107 17:05:35.611811  140177 cli_runner.go:164] Run: docker container inspect multinode-170011-m02 --format={{.State.Status}}
	I1107 17:05:35.635711  140177 status.go:330] multinode-170011-m02 host status = "Stopped" (err=<nil>)
	I1107 17:05:35.635742  140177 status.go:343] host is not running, skipping remaining checks
	I1107 17:05:35.635751  140177 status.go:257] multinode-170011-m02 status: &{Name:multinode-170011-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.74s)

TestMultiNode/serial/RestartMultiNode (79.05s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-170011 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1107 17:05:38.771407   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-170011 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m18.349159545s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-170011 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.05s)

TestMultiNode/serial/ValidateNameConflict (28.74s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-170011
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-170011-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-170011-m02 --driver=docker  --container-runtime=docker: exit status 14 (90.930298ms)
-- stdout --
	* [multinode-170011-m02] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-170011-m02' is duplicated with machine name 'multinode-170011-m02' in profile 'multinode-170011'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-170011-m03 --driver=docker  --container-runtime=docker
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-170011-m03 --driver=docker  --container-runtime=docker: (26.049447082s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-170011
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-170011: exit status 80 (349.955125ms)
-- stdout --
	* Adding node m03 to cluster multinode-170011
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-170011-m03 already exists in multinode-170011-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-170011-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-170011-m03: (2.172712687s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (28.74s)

TestPreload (124.57s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-170727 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E1107 17:07:31.703979   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 17:07:48.390894   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-170727 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (54.803872967s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-170727 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-170727 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.6
E1107 17:08:54.752925   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
preload_test.go:67: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-170727 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.6: (1m6.000510407s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-170727 -- docker images
helpers_test.go:175: Cleaning up "test-preload-170727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-170727
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-170727: (2.508066992s)
--- PASS: TestPreload (124.57s)
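
The scenario: start on an older Kubernetes with the preload tarball disabled, side-load an image, then upgrade in place and confirm the image survives. A sketch with a hypothetical profile:

    minikube start -p demo-preload --memory=2200 --preload=false \
      --driver=docker --container-runtime=docker --kubernetes-version=v1.24.4
    minikube ssh -p demo-preload -- docker pull gcr.io/k8s-minikube/busybox
    minikube start -p demo-preload --memory=2200 \
      --driver=docker --container-runtime=docker --kubernetes-version=v1.24.6
    minikube ssh -p demo-preload -- docker images   # busybox should still be listed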

TestScheduledStopUnix (100.87s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-170931 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-170931 --memory=2048 --driver=docker  --container-runtime=docker: (27.391892972s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-170931 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-170931 -n scheduled-stop-170931
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-170931 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-170931 --cancel-scheduled
E1107 17:10:11.089755   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-170931 -n scheduled-stop-170931
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-170931
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-170931 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-170931
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-170931: exit status 7 (100.342459ms)
-- stdout --
	scheduled-stop-170931
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-170931 -n scheduled-stop-170931
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-170931 -n scheduled-stop-170931: exit status 7 (93.778472ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-170931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-170931
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-170931: (1.725495229s)
--- PASS: TestScheduledStopUnix (100.87s)
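
Scheduled stops are armed, re-armed, and cancelled entirely through stop flags, and the pending timer is visible as TimeToStop in status output. A sketch:

    minikube stop -p demo-sched --schedule 5m          # arm a delayed stop
    minikube status -p demo-sched --format={{.TimeToStop}}
    minikube stop -p demo-sched --cancel-scheduled     # disarm it again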

TestSkaffold (55.76s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe202828380 version
skaffold_test.go:63: skaffold version: v2.0.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-171112 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-171112 --memory=2600 --driver=docker  --container-runtime=docker: (26.703039072s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:110: (dbg) Run:  /tmp/skaffold.exe202828380 run --minikube-profile skaffold-171112 --kube-context skaffold-171112 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /tmp/skaffold.exe202828380 run --minikube-profile skaffold-171112 --kube-context skaffold-171112 --status-check=true --port-forward=false --interactive=false: (15.975509874s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-69774974c6-8pfcl" [a4d433cf-3b14-460a-9fa8-fa14c9458eaa] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.01176267s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-78b75fdcd7-nlgs5" [dd2ef1d2-9314-4495-b834-4558bdf155a4] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006123949s
helpers_test.go:175: Cleaning up "skaffold-171112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-171112
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-171112: (2.440675911s)
--- PASS: TestSkaffold (55.76s)
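
The deploy is pinned to a minikube profile and kube-context, with status checks on and port forwarding and prompts off. A sketch, assuming skaffold is installed and the profile is running:

    skaffold run --minikube-profile demo-skaffold --kube-context demo-skaffold \
      --status-check=true --port-forward=false --interactive=false
    kubectl get pods -l app=leeroy-app   # the test waits for these to report Running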

TestInsufficientStorage (11.24s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-171208 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-171208 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.752708344s)
-- stdout --
	{"specversion":"1.0","id":"11acd0f6-0840-42d6-be29-5981a2cd9b28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-171208] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a865c5d9-c288-4cf6-a83b-7d63fed84909","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15310"}}
	{"specversion":"1.0","id":"f28abbe2-52ee-43e4-9de1-e05ebb5c15bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f0c12c6c-af0e-4ce5-9a8f-10569352e0b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig"}}
	{"specversion":"1.0","id":"7573101d-0c44-42f0-b143-1b4c9a4ddf8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube"}}
	{"specversion":"1.0","id":"5bae7d5f-1fb8-4b2d-a0c9-782c0f35715f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"59d612bd-98d9-41c9-a965-40d31c951d4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0e305388-d008-4e24-90f4-d446a1e951fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"118120cf-a879-419d-8ce9-39fa8eedd72e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e54e6456-0457-442c-b804-f5e3a348fd9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"78c64d29-e597-4d79-a583-4dca297a85e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-171208 in cluster insufficient-storage-171208","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5daa2d10-91c0-4076-8d91-18a6dd416f3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"93ebd9d3-bda3-46e2-84a4-11a04c84c0c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd9251a5-c110-47da-b207-acf31c6835b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-171208 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-171208 --output=json --layout=cluster: exit status 7 (344.43734ms)
-- stdout --
	{"Name":"insufficient-storage-171208","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-171208","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1107 17:12:17.723474  178068 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-171208" does not appear in /home/jenkins/minikube-integration/15310-3679/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-171208 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-171208 --output=json --layout=cluster: exit status 7 (341.47331ms)
-- stdout --
	{"Name":"insufficient-storage-171208","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-171208","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1107 17:12:18.066349  178179 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-171208" does not appear in /home/jenkins/minikube-integration/15310-3679/kubeconfig
	E1107 17:12:18.074842  178179 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/insufficient-storage-171208/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-171208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-171208
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-171208: (1.803057942s)
--- PASS: TestInsufficientStorage (11.24s)
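
With --output=json, start emits structured CloudEvents-style records, so the simulated full disk (the MINIKUBE_TEST_STORAGE_CAPACITY variables above) surfaces as a typed error with exit code 26, and status reports StatusCode 507. A sketch:

    minikube start -p demo-storage --memory=2048 --output=json --wait=true \
      --driver=docker --container-runtime=docker
    minikube status -p demo-storage --output=json --layout=cluster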

TestRunningBinaryUpgrade (71.5s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.9.0.1966166329.exe start -p running-upgrade-171507 --memory=2200 --vm-driver=docker  --container-runtime=docker
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.9.0.1966166329.exe start -p running-upgrade-171507 --memory=2200 --vm-driver=docker  --container-runtime=docker: (50.953899728s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-171507 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-171507 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (17.875365606s)
helpers_test.go:175: Cleaning up "running-upgrade-171507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-171507
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-171507: (2.26824099s)
--- PASS: TestRunningBinaryUpgrade (71.50s)
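
The upgrade path is simply "start with the old binary, then start again with the new one against the same profile"; note the legacy --vm-driver spelling the v1.9.0 binary expects. A sketch, assuming an old release binary has been downloaded to a hypothetical path:

    /tmp/minikube-v1.9.0 start -p demo-upgrade --memory=2200 \
      --vm-driver=docker --container-runtime=docker
    minikube start -p demo-upgrade --memory=2200 \
      --driver=docker --container-runtime=docker   # new binary adopts the running cluster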

TestKubernetesUpgrade (382.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171418 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-171418 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.477441368s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-171418

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-171418: (10.998727637s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-171418 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-171418 status --format={{.Host}}: exit status 7 (111.221777ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171418 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1107 17:15:11.087753   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-171418 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m36.611416668s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-171418 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171418 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-171418 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (111.380507ms)

-- stdout --
	* [kubernetes-upgrade-171418] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-171418
	    minikube start -p kubernetes-upgrade-171418 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1714182 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-171418 --kubernetes-version=v1.25.3
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171418 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-171418 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.053162626s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-171418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-171418

=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-171418: (2.609994282s)
--- PASS: TestKubernetesUpgrade (382.04s)

TestMissingContainerUpgrade (99.07s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.1532997871.exe start -p missing-upgrade-171351 --memory=2200 --driver=docker  --container-runtime=docker
E1107 17:14:11.435555   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.1532997871.exe start -p missing-upgrade-171351 --memory=2200 --driver=docker  --container-runtime=docker: (49.733408609s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-171351
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-171351: (1.730339354s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-171351
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-171351 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-171351 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.539552567s)
helpers_test.go:175: Cleaning up "missing-upgrade-171351" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-171351
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-171351: (2.554249069s)
--- PASS: TestMissingContainerUpgrade (99.07s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-171219 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-171219 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (127.973356ms)

-- stdout --
	* [NoKubernetes-171219] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15310
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

TestNoKubernetes/serial/StartWithK8s (40.53s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-171219 --driver=docker  --container-runtime=docker
E1107 17:12:31.704480   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 17:12:48.390892   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-171219 --driver=docker  --container-runtime=docker: (39.985926716s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-171219 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.53s)

TestNoKubernetes/serial/StartWithStopK8s (15.55s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-171219 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-171219 --no-kubernetes --driver=docker  --container-runtime=docker: (13.182207479s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-171219 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-171219 status -o json: exit status 2 (468.3758ms)

-- stdout --
	{"Name":"NoKubernetes-171219","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-171219
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-171219: (1.90236309s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.55s)

TestNoKubernetes/serial/Start (7.77s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-171219 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-171219 --no-kubernetes --driver=docker  --container-runtime=docker: (7.765747041s)
--- PASS: TestNoKubernetes/serial/Start (7.77s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-171219 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-171219 "sudo systemctl is-active --quiet service kubelet": exit status 1 (419.721785ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

TestNoKubernetes/serial/ProfileList (6.59s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (5.653288531s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (6.59s)

TestNoKubernetes/serial/Stop (1.33s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-171219
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-171219: (1.332737598s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

TestNoKubernetes/serial/StartNoArgs (8.66s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-171219 --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-171219 --driver=docker  --container-runtime=docker: (8.663449227s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.66s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.48s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-171219 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-171219 "sudo systemctl is-active --quiet service kubelet": exit status 1 (482.569652ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.48s)

TestStoppedBinaryUpgrade/Setup (0.43s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

TestStoppedBinaryUpgrade/Upgrade (81.16s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.9.0.3059921904.exe start -p stopped-upgrade-171343 --memory=2200 --vm-driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.9.0.3059921904.exe start -p stopped-upgrade-171343 --memory=2200 --vm-driver=docker  --container-runtime=docker: (49.12624695s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.9.0.3059921904.exe -p stopped-upgrade-171343 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.9.0.3059921904.exe -p stopped-upgrade-171343 stop: (12.494343244s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-171343 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-171343 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.537528405s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (81.16s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-171343
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-171343: (1.407419113s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

TestPause/serial/Start (94.81s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-171530 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-171530 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m34.809316215s)
--- PASS: TestPause/serial/Start (94.81s)

TestNetworkPlugins/group/auto/Start (81.62s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-171300 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-171300 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (1m21.619763264s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.62s)

TestNetworkPlugins/group/kindnet/Start (53.01s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-171300 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker
E1107 17:16:34.131986   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
E1107 17:16:56.169956   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:16:56.175265   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:16:56.185510   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:16:56.205831   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:16:56.246087   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:16:56.326434   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:16:56.486847   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:16:56.807325   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:16:57.448151   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:16:58.728796   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:17:01.289671   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-171300 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: (53.010917198s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-sv2qw" [bc39df3e-dd67-44e4-847f-aa5037dc2046] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.0130979s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-171300 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-171300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-jnxrz" [1a690010-183a-4a2e-85ba-397709e25409] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-jnxrz" [1a690010-183a-4a2e-85ba-397709e25409] Running
E1107 17:17:31.704218   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006548039s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-171300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-171300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1107 17:17:37.132778   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-171300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/cilium/Start (91.19s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-171301 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-171301 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (1m31.185682076s)
--- PASS: TestNetworkPlugins/group/cilium/Start (91.19s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-171300 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (12.25s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-171300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-s58gz" [2d9ec6cf-1b30-49f1-b200-a832f5336378] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-s58gz" [2d9ec6cf-1b30-49f1-b200-a832f5336378] Running
E1107 17:17:48.390820   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.00798396s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.25s)

TestNetworkPlugins/group/auto/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-171300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-171300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (5.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-171300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-171300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.166503079s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.17s)

TestNetworkPlugins/group/false/Start (42.95s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p false-171300 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker
E1107 17:18:18.093454   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p false-171300 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (42.948929461s)
--- PASS: TestNetworkPlugins/group/false/Start (42.95s)

TestNetworkPlugins/group/false/KubeletFlags (0.52s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-171300 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.52s)

TestNetworkPlugins/group/false/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-171300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-rhblf" [f084d74f-88c3-40a2-8283-c4d6ed3d8399] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-rhblf" [f084d74f-88c3-40a2-8283-c4d6ed3d8399] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.007416576s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.25s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-4jwpg" [9d58f82d-5ef0-4a07-b928-a501af616051] Running

=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.01648358s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-171301 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.43s)

TestNetworkPlugins/group/cilium/NetCatPod (11.01s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-171301 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-ch7qk" [fa79a85d-b4ed-44f8-834e-8499d9582d6d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-ch7qk" [fa79a85d-b4ed-44f8-834e-8499d9582d6d] Running

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.008027503s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (11.01s)

TestNetworkPlugins/group/cilium/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-171301 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.17s)

TestNetworkPlugins/group/cilium/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-171301 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.16s)

TestNetworkPlugins/group/cilium/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-171301 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (44.17s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-171300 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker
E1107 17:19:40.014193   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-171300 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (44.165430792s)
--- PASS: TestNetworkPlugins/group/bridge/Start (44.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-171300 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

TestNetworkPlugins/group/bridge/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-171300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-52f9r" [33f17133-29d4-4fe4-8fad-7832d680d84c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-52f9r" [33f17133-29d4-4fe4-8fad-7832d680d84c] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.007719191s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

TestNetworkPlugins/group/enable-default-cni/Start (301.5s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-171300 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-171300 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (5m1.502085983s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (301.50s)

TestNetworkPlugins/group/kubenet/Start (40.21s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-171300 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E1107 17:23:43.307993   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:23:45.500336   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:23:45.505623   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:23:45.515951   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:23:45.536298   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:23:45.576631   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:23:45.656965   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:23:45.817319   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:23:46.138213   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:23:46.779133   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:23:48.060211   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:23:50.621440   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-171300 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (40.211501305s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (40.21s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-171300 "pgrep -a kubelet"
E1107 17:24:21.270983   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.44s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-171300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-9l8qz" [80efb594-03ad-4860-bda2-4c81661b6253] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1107 17:24:26.463154   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-9l8qz" [80efb594-03ad-4860-bda2-4c81661b6253] Running

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.006680475s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.25s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-171300 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-171300 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-hkk7v" [4e640ff6-541a-4578-9bbc-ef5caebf1df3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-hkk7v" [4e640ff6-541a-4578-9bbc-ef5caebf1df3] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006145083s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

TestStartStop/group/old-k8s-version/serial/FirstStart (115.38s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-172642 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-172642 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (1m55.379538593s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (115.38s)

TestStartStop/group/no-preload/serial/FirstStart (308.91s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-172648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-172648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3: (5m8.906475723s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (308.91s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-172642 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [da48fb02-6c62-402e-a625-24233cec1d97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [da48fb02-6c62-402e-a625-24233cec1d97] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.011716045s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-172642 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.7s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-172642 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-172642 describe deploy/metrics-server -n kube-system
E1107 17:28:45.500430   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (10.82s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-172642 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-172642 --alsologtostderr -v=3: (10.824140473s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.82s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-172642 -n old-k8s-version-172642
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-172642 -n old-k8s-version-172642: exit status 7 (103.147118ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-172642 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (394.95s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-172642 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-172642 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (6m34.547508814s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-172642 -n old-k8s-version-172642
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (394.95s)

TestStartStop/group/embed-certs/serial/FirstStart (301.92s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-173036 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3
E1107 17:30:36.309759   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:30:51.436101   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 17:30:56.790389   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-173036 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3: (5m1.919400274s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (301.92s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (289.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-173132 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3
E1107 17:31:37.750545   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:31:56.169687   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-173132 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3: (4m49.97734965s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (289.98s)

TestStartStop/group/no-preload/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-172648 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [99bdd075-8028-4ccc-801b-3d243c9b4a97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [99bdd075-8028-4ccc-801b-3d243c9b4a97] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.011036746s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-172648 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.68s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-172648 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-172648 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.68s)

TestStartStop/group/no-preload/serial/Stop (10.79s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-172648 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-172648 --alsologtostderr -v=3: (10.790245143s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.79s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-172648 -n no-preload-172648
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-172648 -n no-preload-172648: exit status 7 (105.826139ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-172648 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (550.96s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-172648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3
E1107 17:32:21.384545   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kindnet-171300/client.crt: no such file or directory
E1107 17:32:31.704239   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/addons-164540/client.crt: no such file or directory
E1107 17:32:41.665076   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/auto-171300/client.crt: no such file or directory
E1107 17:32:48.390652   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/functional-165008/client.crt: no such file or directory
E1107 17:32:59.670854   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:33:14.132379   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
E1107 17:33:19.215273   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:33:45.500433   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/false-171300/client.crt: no such file or directory
E1107 17:34:11.029077   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: no such file or directory
E1107 17:34:21.606875   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:34:21.612167   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:34:21.622491   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:34:21.642860   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:34:21.683319   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:34:21.763729   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:34:21.924567   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:34:22.244919   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:34:22.885194   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:34:24.165428   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:34:26.726552   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:34:31.847298   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:34:42.087928   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:35:02.568577   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
E1107 17:35:11.087625   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
E1107 17:35:15.828040   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-172648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3: (9m10.501774198s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-172648 -n no-preload-172648
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (550.96s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-bl92h" [15ec588f-5d6f-47ce-8d15-ed2606009708] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011773271s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-bl92h" [15ec588f-5d6f-47ce-8d15-ed2606009708] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006397405s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-172642 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/DeployApp (7.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-173036 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [f4c6aaa5-b3be-48a4-8f24-65b1a72b5222] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [f4c6aaa5-b3be-48a4-8f24-65b1a72b5222] Running

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.012038897s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-173036 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.41s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-172642 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/old-k8s-version/serial/Pause (2.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-172642 --alsologtostderr -v=1
E1107 17:35:42.226576   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
E1107 17:35:42.231945   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
E1107 17:35:42.242320   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
E1107 17:35:42.262936   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
E1107 17:35:42.303279   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
E1107 17:35:42.384159   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
E1107 17:35:42.544356   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-172642 -n old-k8s-version-172642
E1107 17:35:42.864653   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-172642 -n old-k8s-version-172642: exit status 2 (381.310709ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-172642 -n old-k8s-version-172642
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-172642 -n old-k8s-version-172642: exit status 2 (387.068625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-172642 --alsologtostderr -v=1
E1107 17:35:43.504887   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
E1107 17:35:43.511312   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
E1107 17:35:43.529507   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-172642 -n old-k8s-version-172642
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-172642 -n old-k8s-version-172642
E1107 17:35:44.785855   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.98s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-173036 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-173036 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/embed-certs/serial/Stop (10.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-173036 --alsologtostderr -v=3
E1107 17:35:47.346866   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-173036 --alsologtostderr -v=3: (10.827487689s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.83s)

TestStartStop/group/newest-cni/serial/FirstStart (39.75s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-173547 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3
E1107 17:35:52.467464   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-173547 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3: (39.751265995s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.75s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-173036 -n embed-certs-173036
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-173036 -n embed-certs-173036: exit status 7 (117.4783ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-173036 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (549.7s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-173036 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3
E1107 17:36:02.708216   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-173036 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3: (9m9.31243643s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-173036 -n embed-certs-173036
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (549.70s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-173132 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [afeba237-8781-4d94-8749-f58365655f97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1107 17:36:23.189010   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
helpers_test.go:342: "busybox" [afeba237-8781-4d94-8749-f58365655f97] Running

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.013010907s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-173132 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-173547 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/newest-cni/serial/Stop (11.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-173547 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-173547 --alsologtostderr -v=3: (11.022323602s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.02s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-173132 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-173132 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-173132 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-173132 --alsologtostderr -v=3: (10.810338818s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.81s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-173547 -n newest-cni-173547
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-173547 -n newest-cni-173547: exit status 7 (109.905654ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-173547 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (22.32s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-173547 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-173547 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3: (21.720482005s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-173547 -n newest-cni-173547
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-173132 -n default-k8s-diff-port-173132
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-173132 -n default-k8s-diff-port-173132: exit status 7 (105.148907ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-173132 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (549.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-173132 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3
E1107 17:36:56.169511   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-173132 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.25.3: (9m9.339332967s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-173132 -n default-k8s-diff-port-173132
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (549.72s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-173547 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

TestStartStop/group/newest-cni/serial/Pause (3.62s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-173547 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-173547 -n newest-cni-173547
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-173547 -n newest-cni-173547: exit status 2 (476.211776ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-173547 -n newest-cni-173547
E1107 17:37:04.149970   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/enable-default-cni-171300/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-173547 -n newest-cni-173547: exit status 2 (451.014048ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-173547 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-173547 -n newest-cni-173547
E1107 17:37:05.451208   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/kubenet-171300/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-173547 -n newest-cni-173547
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.62s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-vsz6p" [c0c64189-40b1-4a3e-8066-2f13213c75d8] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01415735s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-vsz6p" [c0c64189-40b1-4a3e-8066-2f13213c75d8] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006760549s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-172648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-172648 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/no-preload/serial/Pause (3.18s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-172648 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-172648 -n no-preload-172648
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-172648 -n no-preload-172648: exit status 2 (397.120728ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-172648 -n no-preload-172648
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-172648 -n no-preload-172648: exit status 2 (410.910834ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-172648 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-172648 -n no-preload-172648
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-172648 -n no-preload-172648
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.18s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-trzpq" [aa5edeae-6f26-4aa9-bc30-e3ed735169b6] Running
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-trzpq" [aa5edeae-6f26-4aa9-bc30-e3ed735169b6] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1107 17:45:11.087948   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/ingress-addon-legacy-165341/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011985055s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-trzpq" [aa5edeae-6f26-4aa9-bc30-e3ed735169b6] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1107 17:45:15.827936   10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/bridge-171300/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00746979s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-173036 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-173036 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/embed-certs/serial/Pause (3.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-173036 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-173036 -n embed-certs-173036
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-173036 -n embed-certs-173036: exit status 2 (374.300825ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-173036 -n embed-certs-173036
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-173036 -n embed-certs-173036: exit status 2 (377.650036ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-173036 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-173036 -n embed-certs-173036
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-173036 -n embed-certs-173036
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.03s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-gn5v9" [43dc0268-65d5-4b0e-b21f-63ec18fb0325] Running
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-gn5v9" [43dc0268-65d5-4b0e-b21f-63ec18fb0325] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012089889s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-gn5v9" [43dc0268-65d5-4b0e-b21f-63ec18fb0325] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007082778s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-173132 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-173132 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-173132 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-173132 -n default-k8s-diff-port-173132
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-173132 -n default-k8s-diff-port-173132: exit status 2 (372.048903ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-173132 -n default-k8s-diff-port-173132
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-173132 -n default-k8s-diff-port-173132: exit status 2 (361.576938ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-173132 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-173132 -n default-k8s-diff-port-173132
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-173132 -n default-k8s-diff-port-173132
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.96s)
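
The pause round trip above can be replayed by hand. `minikube status` exits with status 2 while components are paused or stopped, which is why the test records the non-zero exits as "may be ok"; the `--format` flag takes a Go template that selects a single field from the status output:

	# Pause the cluster, confirm the component states, then unpause.
	out/minikube-linux-amd64 pause -p default-k8s-diff-port-173132
	out/minikube-linux-amd64 status -p default-k8s-diff-port-173132 \
	  --format='{{.APIServer}}'    # "Paused", exit status 2
	out/minikube-linux-amd64 status -p default-k8s-diff-port-173132 \
	  --format='{{.Kubelet}}'      # "Stopped", exit status 2
	out/minikube-linux-amd64 unpause -p default-k8s-diff-port-173132
	out/minikube-linux-amd64 status -p default-k8s-diff-port-173132 \
	  --format='{{.APIServer}}'    # expected "Running" once unpaused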

Test skip (19/277)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:451: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.23s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-171300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-171300
--- SKIP: TestNetworkPlugins/group/flannel (0.23s)

TestNetworkPlugins/group/custom-flannel (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-171300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-171300
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.22s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-173131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-173131
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)
