Test Report: Docker_Linux 15770

c18687863e947329a019937a2709fbcc4c6cf8b9:2023-02-03:27723

Failed tests (2/302)

Order  Failed test                                    Duration (s)
18     TestDownloadOnlyKic                            1.65
258    TestPause/serial/SecondStartNoReconfiguration  78.44
TestDownloadOnlyKic (1.65s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-926287 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 start --download-only -p download-docker-926287 --force --alsologtostderr --driver=docker  --container-runtime=docker: exit status 14 (408.907655ms)

-- stdout --
	* [download-docker-926287] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0203 22:08:25.013533  651260 out.go:296] Setting OutFile to fd 1 ...
	I0203 22:08:25.013834  651260 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:08:25.013845  651260 out.go:309] Setting ErrFile to fd 2...
	I0203 22:08:25.013849  651260 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:08:25.013973  651260 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15770-643340/.minikube/bin
	I0203 22:08:25.014554  651260 out.go:303] Setting JSON to false
	I0203 22:08:25.015408  651260 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6655,"bootTime":1675455450,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 22:08:25.015476  651260 start.go:135] virtualization: kvm guest
	I0203 22:08:25.018417  651260 out.go:177] * [download-docker-926287] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 22:08:25.020259  651260 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 22:08:25.020286  651260 notify.go:220] Checking for updates...
	I0203 22:08:25.023748  651260 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 22:08:25.025709  651260 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:08:25.027348  651260 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	I0203 22:08:25.029039  651260 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	W0203 22:08:25.030480  651260 out.go:239] ! minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	! minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0203 22:08:25.030683  651260 driver.go:365] Setting default libvirt URI to qemu:///system
	W0203 22:08:25.098003  651260 docker.go:114] docker version returned error: exit status 1
	I0203 22:08:25.100788  651260 out.go:177] * Using the docker driver based on user configuration
	I0203 22:08:25.102660  651260 start.go:296] selected driver: docker
	I0203 22:08:25.102701  651260 start.go:857] validating driver "docker" against <nil>
	I0203 22:08:25.102778  651260 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	W0203 22:08:25.222119  651260 info.go:262] unmarshal docker info: parsing time "\"\"" as "\"2006-01-02T15:04:05Z07:00\"": cannot parse "\"" as "2006"
	I0203 22:08:25.222232  651260 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	W0203 22:08:25.343875  651260 info.go:262] unmarshal docker info: parsing time "\"\"" as "\"2006-01-02T15:04:05Z07:00\"": cannot parse "\"" as "2006"
	I0203 22:08:25.346557  651260 out.go:177] 
	W0203 22:08:25.348569  651260 out.go:239] X Exiting due to MK_USAGE: Ensure your Docker is running and is healthy.
	X Exiting due to MK_USAGE: Ensure your Docker is running and is healthy.
	I0203 22:08:25.350488  651260 out.go:177] 

** /stderr **
aaa_download_only_test.go:229: start with download only failed ["start" "--download-only" "-p" "download-docker-926287" "--force" "--alsologtostderr" "--driver=docker" "" "--container-runtime=docker"] : exit status 14
helpers_test.go:175: Cleaning up "download-docker-926287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-926287
--- FAIL: TestDownloadOnlyKic (1.65s)
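
Analysis: exit status 14 (MK_USAGE) is only the final symptom here; the root cause is the pair of "unmarshal docker info" warnings above. The Docker daemon on the agent was unhealthy (docker version itself returned exit status 1), and docker system info --format "{{json .}}" emitted an empty SystemTime, which Go's RFC3339 time parser rejects. The sketch below reproduces that parse error in isolation; the dockerInfo struct is a hypothetical stand-in, not minikube's actual type, and the exact error text can vary slightly across Go versions.

	package main

	import (
		"encoding/json"
		"fmt"
		"time"
	)

	// dockerInfo stands in for the struct that the output of
	// `docker system info --format "{{json .}}"` is decoded into;
	// only the field relevant to this failure is shown.
	type dockerInfo struct {
		SystemTime time.Time `json:"SystemTime"`
	}

	func main() {
		// An unhealthy daemon can report SystemTime as an empty JSON string.
		var info dockerInfo
		err := json.Unmarshal([]byte(`{"SystemTime":""}`), &info)
		fmt.Println(err)
		// Output (Go toolchains of this era):
		// parsing time "\"\"" as "\"2006-01-02T15:04:05Z07:00\"": cannot parse "\"" as "2006"
	}

The "unmarshal docker info:" prefix in the warnings is minikube's own wrapping of this parse error; minikube then exits with MK_USAGE because the daemon could not be queried.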

TestPause/serial/SecondStartNoReconfiguration (78.44s)
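
The assertion at pause_test.go:100 (below) checks that the combined output of a second "minikube start" against an already-running cluster contains the line "The running cluster does not require reconfiguration"; here the second start instead spent 1m12s updating and re-provisioning the node, so the string never appeared. A rough stand-alone sketch of that containment check follows; it is an approximation, not the test's literal code.

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Combined stdout/stderr captured from the second `minikube start`
		// run (abbreviated; the real harness captures the full output).
		secondStartLog := `* Updating the running docker "pause-868256" container ...`
		const want = "The running cluster does not require reconfiguration"
		if !strings.Contains(secondStartLog, want) {
			fmt.Printf("expected the second start log output to include %q but got: %s\n",
				want, secondStartLog)
		}
	}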

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-868256 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-868256 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m12.200609597s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-868256] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-868256 in cluster pause-868256
	* Pulling base image ...
	* Updating the running docker "pause-868256" container ...
	* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Done! kubectl is now configured to use "pause-868256" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0203 22:37:06.452655  979588 out.go:296] Setting OutFile to fd 1 ...
	I0203 22:37:06.452948  979588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:37:06.452962  979588 out.go:309] Setting ErrFile to fd 2...
	I0203 22:37:06.452969  979588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:37:06.453158  979588 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15770-643340/.minikube/bin
	I0203 22:37:06.453995  979588 out.go:303] Setting JSON to false
	I0203 22:37:06.456409  979588 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8377,"bootTime":1675455450,"procs":930,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 22:37:06.456507  979588 start.go:135] virtualization: kvm guest
	I0203 22:37:06.460350  979588 out.go:177] * [pause-868256] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 22:37:06.462720  979588 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 22:37:06.463015  979588 notify.go:220] Checking for updates...
	I0203 22:37:06.467384  979588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 22:37:06.469586  979588 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:37:06.473340  979588 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	I0203 22:37:06.476959  979588 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 22:37:06.478771  979588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 22:37:06.481222  979588 config.go:180] Loaded profile config "pause-868256": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:37:06.481988  979588 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 22:37:06.577109  979588 docker.go:141] docker version: linux-23.0.0:Docker Engine - Community
	I0203 22:37:06.577220  979588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 22:37:06.762093  979588 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:70 SystemTime:2023-02-03 22:37:06.749544509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:31aa4358a36870b21a992d3ad2bef29e1d693bec Expected:31aa4358a36870b21a992d3ad2bef29e1d693bec} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 22:37:06.762191  979588 docker.go:282] overlay module found
	I0203 22:37:06.766036  979588 out.go:177] * Using the docker driver based on existing profile
	I0203 22:37:06.767630  979588 start.go:296] selected driver: docker
	I0203 22:37:06.767664  979588 start.go:857] validating driver "docker" against &{Name:pause-868256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-868256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 22:37:06.767802  979588 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 22:37:06.767906  979588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 22:37:06.922155  979588 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:70 SystemTime:2023-02-03 22:37:06.913252056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:31aa4358a36870b21a992d3ad2bef29e1d693bec Expected:31aa4358a36870b21a992d3ad2bef29e1d693bec} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 22:37:06.922722  979588 cni.go:84] Creating CNI manager for ""
	I0203 22:37:06.922740  979588 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 22:37:06.922748  979588 start_flags.go:319] config:
	{Name:pause-868256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-868256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 22:37:06.925068  979588 out.go:177] * Starting control plane node pause-868256 in cluster pause-868256
	I0203 22:37:06.926706  979588 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 22:37:06.928584  979588 out.go:177] * Pulling base image ...
	I0203 22:37:06.930528  979588 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 22:37:06.930575  979588 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 22:37:06.930594  979588 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0203 22:37:06.930606  979588 cache.go:57] Caching tarball of preloaded images
	I0203 22:37:06.930718  979588 preload.go:174] Found /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 22:37:06.930737  979588 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0203 22:37:06.930885  979588 profile.go:148] Saving config to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/config.json ...
	I0203 22:37:07.027748  979588 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon, skipping pull
	I0203 22:37:07.027777  979588 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in daemon, skipping load
	I0203 22:37:07.027800  979588 cache.go:193] Successfully downloaded all kic artifacts
	I0203 22:37:07.027845  979588 start.go:364] acquiring machines lock for pause-868256: {Name:mk60fd01d4905f76fc52d88c5faf2c16c1215cb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 22:37:07.027949  979588 start.go:368] acquired machines lock for "pause-868256" in 63.696µs
	I0203 22:37:07.027976  979588 start.go:96] Skipping create...Using existing machine configuration
	I0203 22:37:07.027982  979588 fix.go:55] fixHost starting: 
	I0203 22:37:07.028293  979588 cli_runner.go:164] Run: docker container inspect pause-868256 --format={{.State.Status}}
	I0203 22:37:07.142012  979588 fix.go:103] recreateIfNeeded on pause-868256: state=Running err=<nil>
	W0203 22:37:07.142054  979588 fix.go:129] unexpected machine state, will restart: <nil>
	I0203 22:37:07.144756  979588 out.go:177] * Updating the running docker "pause-868256" container ...
	I0203 22:37:07.146606  979588 machine.go:88] provisioning docker machine ...
	I0203 22:37:07.146649  979588 ubuntu.go:169] provisioning hostname "pause-868256"
	I0203 22:37:07.146726  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:37:07.272454  979588 main.go:141] libmachine: Using SSH client type: native
	I0203 22:37:07.272701  979588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33311 <nil> <nil>}
	I0203 22:37:07.272722  979588 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-868256 && echo "pause-868256" | sudo tee /etc/hostname
	I0203 22:37:07.430464  979588 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-868256
	
	I0203 22:37:07.430538  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:37:07.555053  979588 main.go:141] libmachine: Using SSH client type: native
	I0203 22:37:07.555261  979588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33311 <nil> <nil>}
	I0203 22:37:07.555285  979588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-868256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-868256/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-868256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 22:37:07.705268  979588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 22:37:07.705308  979588 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15770-643340/.minikube CaCertPath:/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15770-643340/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15770-643340/.minikube}
	I0203 22:37:07.705335  979588 ubuntu.go:177] setting up certificates
	I0203 22:37:07.705344  979588 provision.go:83] configureAuth start
	I0203 22:37:07.705399  979588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-868256
	I0203 22:37:07.806934  979588 provision.go:138] copyHostCerts
	I0203 22:37:07.807002  979588 exec_runner.go:144] found /home/jenkins/minikube-integration/15770-643340/.minikube/cert.pem, removing ...
	I0203 22:37:07.807012  979588 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15770-643340/.minikube/cert.pem
	I0203 22:37:07.807084  979588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15770-643340/.minikube/cert.pem (1123 bytes)
	I0203 22:37:07.807203  979588 exec_runner.go:144] found /home/jenkins/minikube-integration/15770-643340/.minikube/key.pem, removing ...
	I0203 22:37:07.807217  979588 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15770-643340/.minikube/key.pem
	I0203 22:37:07.807250  979588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15770-643340/.minikube/key.pem (1679 bytes)
	I0203 22:37:07.807311  979588 exec_runner.go:144] found /home/jenkins/minikube-integration/15770-643340/.minikube/ca.pem, removing ...
	I0203 22:37:07.807322  979588 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15770-643340/.minikube/ca.pem
	I0203 22:37:07.807350  979588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15770-643340/.minikube/ca.pem (1082 bytes)
	I0203 22:37:07.807402  979588 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15770-643340/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca-key.pem org=jenkins.pause-868256 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube pause-868256]
	I0203 22:37:08.031492  979588 provision.go:172] copyRemoteCerts
	I0203 22:37:08.031568  979588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 22:37:08.031682  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:37:08.137766  979588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/pause-868256/id_rsa Username:docker}
	I0203 22:37:08.241754  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 22:37:08.273220  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0203 22:37:08.300516  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 22:37:08.322998  979588 provision.go:86] duration metric: configureAuth took 617.632628ms
	I0203 22:37:08.323031  979588 ubuntu.go:193] setting minikube options for container-runtime
	I0203 22:37:08.323236  979588 config.go:180] Loaded profile config "pause-868256": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:37:08.323288  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:37:08.431133  979588 main.go:141] libmachine: Using SSH client type: native
	I0203 22:37:08.431363  979588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33311 <nil> <nil>}
	I0203 22:37:08.431386  979588 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 22:37:08.578182  979588 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0203 22:37:08.578208  979588 ubuntu.go:71] root file system type: overlay
	I0203 22:37:08.578416  979588 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 22:37:08.578482  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:37:08.672509  979588 main.go:141] libmachine: Using SSH client type: native
	I0203 22:37:08.672717  979588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33311 <nil> <nil>}
	I0203 22:37:08.672811  979588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 22:37:08.822303  979588 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 22:37:08.822381  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:37:08.921728  979588 main.go:141] libmachine: Using SSH client type: native
	I0203 22:37:08.921940  979588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33311 <nil> <nil>}
	I0203 22:37:08.921971  979588 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 22:37:09.064958  979588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 22:37:09.064986  979588 machine.go:91] provisioned docker machine in 1.918355702s
	I0203 22:37:09.064997  979588 start.go:300] post-start starting for "pause-868256" (driver="docker")
	I0203 22:37:09.065005  979588 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 22:37:09.065076  979588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 22:37:09.065147  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:37:09.162255  979588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/pause-868256/id_rsa Username:docker}
	I0203 22:37:09.260781  979588 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 22:37:09.264349  979588 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0203 22:37:09.264378  979588 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0203 22:37:09.264391  979588 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0203 22:37:09.264398  979588 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0203 22:37:09.264410  979588 filesync.go:126] Scanning /home/jenkins/minikube-integration/15770-643340/.minikube/addons for local assets ...
	I0203 22:37:09.264471  979588 filesync.go:126] Scanning /home/jenkins/minikube-integration/15770-643340/.minikube/files for local assets ...
	I0203 22:37:09.264559  979588 filesync.go:149] local asset: /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/6500652.pem -> 6500652.pem in /etc/ssl/certs
	I0203 22:37:09.264641  979588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 22:37:09.273787  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/6500652.pem --> /etc/ssl/certs/6500652.pem (1708 bytes)
	I0203 22:37:09.293067  979588 start.go:303] post-start completed in 228.054318ms
	I0203 22:37:09.293197  979588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 22:37:09.293253  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:37:09.382192  979588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/pause-868256/id_rsa Username:docker}
	I0203 22:37:09.477454  979588 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0203 22:37:09.481699  979588 fix.go:57] fixHost completed within 2.453707462s
	I0203 22:37:09.481728  979588 start.go:83] releasing machines lock for "pause-868256", held for 2.45376667s
	I0203 22:37:09.481816  979588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-868256
	I0203 22:37:09.567241  979588 ssh_runner.go:195] Run: cat /version.json
	I0203 22:37:09.567299  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:37:09.567318  979588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 22:37:09.567392  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:37:09.656892  979588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/pause-868256/id_rsa Username:docker}
	I0203 22:37:09.662799  979588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/pause-868256/id_rsa Username:docker}
	I0203 22:37:09.751538  979588 ssh_runner.go:195] Run: systemctl --version
	I0203 22:37:09.791042  979588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 22:37:09.799801  979588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0203 22:37:09.820904  979588 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0203 22:37:09.821045  979588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 22:37:09.828891  979588 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0203 22:37:09.846710  979588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 22:37:09.855934  979588 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0203 22:37:09.855981  979588 start.go:483] detecting cgroup driver to use...
	I0203 22:37:09.856019  979588 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 22:37:09.856188  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 22:37:09.879449  979588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0203 22:37:09.890937  979588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 22:37:09.900847  979588 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 22:37:09.900927  979588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 22:37:09.915215  979588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 22:37:09.926486  979588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 22:37:09.938258  979588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 22:37:09.951007  979588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 22:37:09.962508  979588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 22:37:09.978926  979588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 22:37:09.987719  979588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 22:37:09.996638  979588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 22:37:10.146091  979588 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 22:37:15.355687  979588 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (5.2095534s)
	I0203 22:37:15.355720  979588 start.go:483] detecting cgroup driver to use...
	I0203 22:37:15.355754  979588 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 22:37:15.355802  979588 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 22:37:15.368794  979588 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0203 22:37:15.368852  979588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 22:37:15.380521  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 22:37:15.400594  979588 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 22:37:15.569429  979588 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 22:37:15.903636  979588 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 22:37:15.903669  979588 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0203 22:37:15.935023  979588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 22:37:16.069111  979588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 22:37:16.895302  979588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 22:37:16.994086  979588 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 22:37:17.089516  979588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 22:37:17.257492  979588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 22:37:17.537633  979588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 22:37:17.643280  979588 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 22:37:17.643358  979588 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 22:37:17.651741  979588 start.go:551] Will wait 60s for crictl version
	I0203 22:37:17.651820  979588 ssh_runner.go:195] Run: which crictl
	I0203 22:37:17.657605  979588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 22:37:17.951272  979588 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0203 22:37:17.951342  979588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 22:37:17.990926  979588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 22:37:18.029079  979588 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0203 22:37:18.029171  979588 cli_runner.go:164] Run: docker network inspect pause-868256 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0203 22:37:18.123809  979588 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0203 22:37:18.127490  979588 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 22:37:18.127554  979588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 22:37:18.165721  979588 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0203 22:37:18.165758  979588 docker.go:560] Images already preloaded, skipping extraction
	I0203 22:37:18.165821  979588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 22:37:18.203803  979588 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0203 22:37:18.203829  979588 cache_images.go:84] Images are preloaded, skipping loading
	I0203 22:37:18.203878  979588 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 22:37:18.346618  979588 cni.go:84] Creating CNI manager for ""
	I0203 22:37:18.346649  979588 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 22:37:18.346665  979588 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0203 22:37:18.346689  979588 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-868256 NodeName:pause-868256 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0203 22:37:18.346894  979588 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-868256"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 22:37:18.347017  979588 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-868256 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:pause-868256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0203 22:37:18.347082  979588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0203 22:37:18.359841  979588 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 22:37:18.359916  979588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 22:37:18.372908  979588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (444 bytes)
	I0203 22:37:18.396314  979588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 22:37:18.415504  979588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2088 bytes)
	I0203 22:37:18.440508  979588 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0203 22:37:18.449395  979588 certs.go:56] Setting up /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256 for IP: 192.168.85.2
	I0203 22:37:18.449426  979588 certs.go:186] acquiring lock for shared ca certs: {Name:mke70fce29a277706b809a1e09202f97eb3de8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:37:18.449569  979588 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15770-643340/.minikube/ca.key
	I0203 22:37:18.449602  979588 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15770-643340/.minikube/proxy-client-ca.key
	I0203 22:37:18.449673  979588 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.key
	I0203 22:37:18.449754  979588 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/apiserver.key.43b9df8c
	I0203 22:37:18.449794  979588 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/proxy-client.key
	I0203 22:37:18.449894  979588 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/650065.pem (1338 bytes)
	W0203 22:37:18.449917  979588 certs.go:397] ignoring /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/650065_empty.pem, impossibly tiny 0 bytes
	I0203 22:37:18.449924  979588 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 22:37:18.449945  979588 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem (1082 bytes)
	I0203 22:37:18.449975  979588 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/cert.pem (1123 bytes)
	I0203 22:37:18.450011  979588 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/key.pem (1679 bytes)
	I0203 22:37:18.450068  979588 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/6500652.pem (1708 bytes)
	I0203 22:37:18.450981  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0203 22:37:18.485343  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 22:37:18.541186  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 22:37:18.562695  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 22:37:18.598320  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 22:37:18.660519  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 22:37:18.734047  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 22:37:18.776543  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 22:37:18.842161  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 22:37:18.873182  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/certs/650065.pem --> /usr/share/ca-certificates/650065.pem (1338 bytes)
	I0203 22:37:18.935529  979588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/6500652.pem --> /usr/share/ca-certificates/6500652.pem (1708 bytes)
	I0203 22:37:18.962666  979588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
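
Each "scp X --> Y" line above streams a local file (or an in-memory buffer, for the kubeconfig) onto the node over SSH; minikube's ssh_runner implements the scp protocol internally. A minimal sketch of that copy step in Go, assuming an already-authenticated *ssh.Client and using a sudo tee pipe instead of real scp; copyToRemote is an illustrative helper, not minikube's API:

package main

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// copyToRemote writes data to remotePath on the host behind client by piping
// it into `sudo tee`, a simplified stand-in for the scp protocol.
func copyToRemote(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()

	stdin, err := sess.StdinPipe()
	if err != nil {
		return err
	}
	if err := sess.Start(fmt.Sprintf("sudo tee %q >/dev/null", remotePath)); err != nil {
		return err
	}
	if _, err := stdin.Write(data); err != nil {
		return err
	}
	stdin.Close() // EOF lets tee finish writing
	return sess.Wait()
}

func main() {
	// ssh.Dial and key setup elided; the sshutil lines later in this log show
	// the port and id_rsa key minikube actually uses.
}
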
	I0203 22:37:19.001524  979588 ssh_runner.go:195] Run: openssl version
	I0203 22:37:19.007636  979588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/650065.pem && ln -fs /usr/share/ca-certificates/650065.pem /etc/ssl/certs/650065.pem"
	I0203 22:37:19.040705  979588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/650065.pem
	I0203 22:37:19.045539  979588 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb  3 22:12 /usr/share/ca-certificates/650065.pem
	I0203 22:37:19.045600  979588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/650065.pem
	I0203 22:37:19.053786  979588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/650065.pem /etc/ssl/certs/51391683.0"
	I0203 22:37:19.065419  979588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6500652.pem && ln -fs /usr/share/ca-certificates/6500652.pem /etc/ssl/certs/6500652.pem"
	I0203 22:37:19.076951  979588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6500652.pem
	I0203 22:37:19.084903  979588 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb  3 22:12 /usr/share/ca-certificates/6500652.pem
	I0203 22:37:19.084964  979588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6500652.pem
	I0203 22:37:19.090688  979588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6500652.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 22:37:19.099320  979588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 22:37:19.107686  979588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 22:37:19.111049  979588 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb  3 22:08 /usr/share/ca-certificates/minikubeCA.pem
	I0203 22:37:19.111103  979588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 22:37:19.138187  979588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
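
The openssl/ln sequence above installs each CA under OpenSSL's lookup convention: a CA in /etc/ssl/certs is resolved via a symlink named <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (51391683, 3ec20f2e, b5213941 in this run). A sketch of one iteration, with the input path taken from the log:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert creates the <subject-hash>.0 symlink for a PEM certificate, the
// same effect as the `openssl x509 -hash` + `ln -fs` pair in the log.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/650065.pem"); err != nil {
		panic(err)
	}
}
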
	I0203 22:37:19.151600  979588 kubeadm.go:401] StartCluster: {Name:pause-868256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-868256 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 22:37:19.151760  979588 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 22:37:19.188337  979588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 22:37:19.234919  979588 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0203 22:37:19.234961  979588 kubeadm.go:633] restartCluster start
	I0203 22:37:19.235011  979588 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0203 22:37:19.250766  979588 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0203 22:37:19.251789  979588 kubeconfig.go:92] found "pause-868256" server: "https://192.168.85.2:8443"
	I0203 22:37:19.253434  979588 kapi.go:59] client config for pause-868256: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.crt", KeyFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.key", CAFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1891540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
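
That rest.Config dump is the client minikube builds straight from the profile's client certificate. A sketch of constructing an equivalent clientset with client-go, reusing the host and cert paths from the log; the pod listing at the end is just a smoke test:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.85.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.key",
			CAFile:   "/home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
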
	I0203 22:37:19.254136  979588 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0203 22:37:19.268046  979588 api_server.go:165] Checking apiserver status ...
	I0203 22:37:19.268126  979588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 22:37:19.286406  979588 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5591/cgroup
	I0203 22:37:19.299816  979588 api_server.go:181] apiserver freezer: "5:freezer:/docker/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23/kubepods/burstable/podf57fb0c2021a3c3d065df4c1c71081f9/b2e7a9f54a0419231060db56c69b54be5167fcfaddd3d1c6fa9c1b05363364fc"
	I0203 22:37:19.299930  979588 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23/kubepods/burstable/podf57fb0c2021a3c3d065df4c1c71081f9/b2e7a9f54a0419231060db56c69b54be5167fcfaddd3d1c6fa9c1b05363364fc/freezer.state
	I0203 22:37:19.335101  979588 api_server.go:203] freezer state: "THAWED"
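
The freezer check above distinguishes a paused apiserver container from a stopped one: find the pid's freezer hierarchy in /proc/<pid>/cgroup, then read freezer.state ("THAWED" vs "FROZEN"). A sketch assuming cgroup v1, as on this host:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// freezerState returns the cgroup v1 freezer state for pid, e.g. "THAWED".
func freezerState(pid int) (string, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Lines look like "5:freezer:/docker/<id>/kubepods/...".
		parts := strings.SplitN(sc.Text(), ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
}

func main() {
	fmt.Println(freezerState(os.Getpid()))
}
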
	I0203 22:37:19.335192  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:24.336487  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0203 22:37:24.336550  979588 retry.go:31] will retry after 263.082536ms: state is "Stopped"
	I0203 22:37:24.599915  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:29.601059  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0203 22:37:29.601113  979588 retry.go:31] will retry after 381.329545ms: state is "Stopped"
	I0203 22:37:29.982636  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:34.985636  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0203 22:37:35.486430  979588 api_server.go:165] Checking apiserver status ...
	I0203 22:37:35.486525  979588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 22:37:35.496258  979588 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5591/cgroup
	I0203 22:37:35.504559  979588 api_server.go:181] apiserver freezer: "5:freezer:/docker/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23/kubepods/burstable/podf57fb0c2021a3c3d065df4c1c71081f9/b2e7a9f54a0419231060db56c69b54be5167fcfaddd3d1c6fa9c1b05363364fc"
	I0203 22:37:35.504627  979588 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23/kubepods/burstable/podf57fb0c2021a3c3d065df4c1c71081f9/b2e7a9f54a0419231060db56c69b54be5167fcfaddd3d1c6fa9c1b05363364fc/freezer.state
	I0203 22:37:35.512311  979588 api_server.go:203] freezer state: "THAWED"
	I0203 22:37:35.512456  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:39.529329  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:50808->192.168.85.2:8443: read: connection reset by peer
	I0203 22:37:39.529397  979588 retry.go:31] will retry after 242.214273ms: state is "Stopped"
	I0203 22:37:39.771732  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:39.772161  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:39.772196  979588 retry.go:31] will retry after 300.724609ms: state is "Stopped"
	I0203 22:37:40.073153  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:40.073605  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:40.073675  979588 retry.go:31] will retry after 427.113882ms: state is "Stopped"
	I0203 22:37:40.501060  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:40.501514  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:40.501559  979588 retry.go:31] will retry after 382.2356ms: state is "Stopped"
	I0203 22:37:40.883917  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:40.884520  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:40.884572  979588 retry.go:31] will retry after 505.529557ms: state is "Stopped"
	I0203 22:37:41.390185  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:41.390639  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:41.390689  979588 retry.go:31] will retry after 609.195524ms: state is "Stopped"
	I0203 22:37:42.000916  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:42.001520  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:42.001578  979588 retry.go:31] will retry after 858.741692ms: state is "Stopped"
	I0203 22:37:42.860459  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:42.860960  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:42.861006  979588 retry.go:31] will retry after 1.201160326s: state is "Stopped"
	I0203 22:37:44.063148  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:44.063665  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:44.063715  979588 retry.go:31] will retry after 1.723796097s: state is "Stopped"
	I0203 22:37:45.787643  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:45.788025  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:45.788074  979588 retry.go:31] will retry after 1.596532639s: state is "Stopped"
	I0203 22:37:47.385464  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:47.385922  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:47.385963  979588 retry.go:31] will retry after 2.189373114s: state is "Stopped"
	I0203 22:37:49.575893  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:49.576333  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:49.576374  979588 api_server.go:165] Checking apiserver status ...
	I0203 22:37:49.576407  979588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 22:37:49.587159  979588 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 22:37:49.587191  979588 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
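
The repeated "will retry after ..." lines above implement a jittered, roughly exponential backoff around the healthz probe until a deadline expires, at which point the cluster is declared in need of reconfiguration. A sketch of the same loop, with InsecureSkipVerify standing in for the CA configuration the real checker uses:

package main

import (
	"crypto/tls"
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 or the deadline elapses.
func waitHealthy(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	backoff := 250 * time.Millisecond
	for end := time.Now().Add(deadline); time.Now().Before(end); {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Jitter the delay like retry.go's varying intervals, then grow it.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		backoff *= 2
	}
	return fmt.Errorf("apiserver not healthy after %s", deadline)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.85.2:8443/healthz", time.Minute))
}
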
	I0203 22:37:49.587197  979588 kubeadm.go:1120] stopping kube-system containers ...
	I0203 22:37:49.587255  979588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 22:37:49.628159  979588 docker.go:456] Stopping containers: [dedaef110fce 65f599eb0eeb 1a8a12bf42f5 0364f8ab712b 900b5dd1be8e e5363a998cd8 b2e7a9f54a04 d0ec4fe6e67f 8f69d29f7923 22bb57467f24 c638c5348fb7 36c6f2ce6f7a ec9da7ff44bd 398c93842956 1d18c29c2599 8e6a7f356be8 4ca8e7d5e7bf 1376f23256fa abc310bef32c 8e3432e0704c 219395fd7579 947eaea538a8 5f7717635d24 56078fc3c5c8 0ecd3de01f82 9a2d067eb6a8 dc2da3996611 fcd73dced5ec 38cacd98d247]
	I0203 22:37:49.628253  979588 ssh_runner.go:195] Run: docker stop dedaef110fce 65f599eb0eeb 1a8a12bf42f5 0364f8ab712b 900b5dd1be8e e5363a998cd8 b2e7a9f54a04 d0ec4fe6e67f 8f69d29f7923 22bb57467f24 c638c5348fb7 36c6f2ce6f7a ec9da7ff44bd 398c93842956 1d18c29c2599 8e6a7f356be8 4ca8e7d5e7bf 1376f23256fa abc310bef32c 8e3432e0704c 219395fd7579 947eaea538a8 5f7717635d24 56078fc3c5c8 0ecd3de01f82 9a2d067eb6a8 dc2da3996611 fcd73dced5ec 38cacd98d247
	I0203 22:37:54.810348  979588 ssh_runner.go:235] Completed: docker stop dedaef110fce 65f599eb0eeb 1a8a12bf42f5 0364f8ab712b 900b5dd1be8e e5363a998cd8 b2e7a9f54a04 d0ec4fe6e67f 8f69d29f7923 22bb57467f24 c638c5348fb7 36c6f2ce6f7a ec9da7ff44bd 398c93842956 1d18c29c2599 8e6a7f356be8 4ca8e7d5e7bf 1376f23256fa abc310bef32c 8e3432e0704c 219395fd7579 947eaea538a8 5f7717635d24 56078fc3c5c8 0ecd3de01f82 9a2d067eb6a8 dc2da3996611 fcd73dced5ec 38cacd98d247: (5.182055717s)
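
Stopping the kube-system containers is two docker invocations: list IDs matching the k8s_*_(kube-system)_ name filter, then stop them all in one batch (the 5.18s "Completed" line above is that batched stop). A sketch via os/exec:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stopped %d containers\n", len(ids))
}
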
	I0203 22:37:54.810423  979588 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 22:37:54.897247  979588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 22:37:54.906084  979588 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb  3 22:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb  3 22:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Feb  3 22:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb  3 22:36 /etc/kubernetes/scheduler.conf
	
	I0203 22:37:54.906148  979588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 22:37:54.914111  979588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 22:37:54.922645  979588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 22:37:54.930848  979588 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0203 22:37:54.930910  979588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 22:37:54.939271  979588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 22:37:54.948889  979588 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0203 22:37:54.948952  979588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
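
The grep/rm pairs above prune kubeconfigs that no longer point at the expected control-plane endpoint; grep exiting 1 (the "may not be in ... - will remove" lines) marks a file stale so kubeadm will regenerate it. The same logic in Go, using the two files pruned in this run:

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// pruneIfStale removes path unless it still references the endpoint.
func pruneIfStale(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // still points at the right endpoint
	}
	fmt.Printf("%s missing %s, removing\n", path, endpoint)
	return os.Remove(path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneIfStale(f); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
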
	I0203 22:37:54.958596  979588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 22:37:54.970088  979588 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0203 22:37:54.970120  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 22:37:55.041194  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 22:37:55.581478  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 22:37:55.756427  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 22:37:55.821027  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
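
A restart does not re-run a full `kubeadm init`; it replays individual init phases against the refreshed kubeadm.yaml, in the order shown above. A sketch of that sequence, with the version and paths taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("phase %q failed: %v\n%s", p, err, out))
		}
	}
}
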
	I0203 22:37:55.939663  979588 api_server.go:51] waiting for apiserver process to appear ...
	I0203 22:37:55.939740  979588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 22:37:56.455678  979588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 22:37:56.959678  979588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 22:37:57.043619  979588 api_server.go:71] duration metric: took 1.103954077s to wait for apiserver process to appear ...
	I0203 22:37:57.043648  979588 api_server.go:87] waiting for apiserver healthz status ...
	I0203 22:37:57.043662  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:37:57.044086  979588 api_server.go:268] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:57.544759  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:38:00.617064  979588 api_server.go:278] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 22:38:00.617089  979588 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 22:38:01.044354  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:38:01.048501  979588 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 22:38:01.048528  979588 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 22:38:01.544217  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:38:01.549580  979588 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 22:38:01.549622  979588 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 22:38:02.045029  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:38:02.050008  979588 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0203 22:38:02.057112  979588 api_server.go:140] control plane version: v1.26.1
	I0203 22:38:02.057145  979588 api_server.go:130] duration metric: took 5.013490141s to wait for apiserver health ...
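
The 403 (anonymous /healthz before RBAC bootstrap finishes) and 500 responses above are expected transients during a restart: a 500 body lists every check as [+] ok or [-] failed until the bootstrap hooks complete, and the wait ends once a plain 200 "ok" arrives. A sketch of pulling the failing checks out of such a body:

package main

import (
	"fmt"
	"strings"
)

// failedChecks returns the "[-]..." lines from a /healthz 500 body.
func failedChecks(body string) []string {
	var failed []string
	for _, line := range strings.Split(body, "\n") {
		if strings.HasPrefix(line, "[-]") {
			failed = append(failed, strings.TrimPrefix(line, "[-]"))
		}
	}
	return failed
}

func main() {
	body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
	fmt.Println(failedChecks(body))
}
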
	I0203 22:38:02.057158  979588 cni.go:84] Creating CNI manager for ""
	I0203 22:38:02.057176  979588 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 22:38:02.060423  979588 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0203 22:38:02.064102  979588 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 22:38:02.074550  979588 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
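
The CNI step writes a bridge conflist into /etc/cni/net.d. The JSON below is an illustrative bridge+portmap configuration of the kind such a file contains, not the exact 457 bytes minikube generated:

package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
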
	I0203 22:38:02.091209  979588 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 22:38:02.102275  979588 system_pods.go:59] 6 kube-system pods found
	I0203 22:38:02.102317  979588 system_pods.go:61] "coredns-787d4945fb-dd5vv" [aed13814-af10-4c1c-9548-20630079cd3c] Running
	I0203 22:38:02.102329  979588 system_pods.go:61] "etcd-pause-868256" [595c2af1-3166-4d59-969d-bc282f646ed5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 22:38:02.102339  979588 system_pods.go:61] "kube-apiserver-pause-868256" [2a4d2f7d-025b-47a7-99c5-70079183e798] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 22:38:02.102349  979588 system_pods.go:61] "kube-controller-manager-pause-868256" [8f8a526f-dea5-4e08-8258-dd4e4654ae32] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 22:38:02.102366  979588 system_pods.go:61] "kube-proxy-6q8r8" [a9c6e5f1-fd98-4bc1-aae7-b0485f877616] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0203 22:38:02.102378  979588 system_pods.go:61] "kube-scheduler-pause-868256" [72bf6a79-cdaf-46bb-93fc-f8d402880694] Running
	I0203 22:38:02.102386  979588 system_pods.go:74] duration metric: took 11.150861ms to wait for pod list to return data ...
	I0203 22:38:02.102398  979588 node_conditions.go:102] verifying NodePressure condition ...
	I0203 22:38:02.106217  979588 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0203 22:38:02.106247  979588 node_conditions.go:123] node cpu capacity is 8
	I0203 22:38:02.106260  979588 node_conditions.go:105] duration metric: took 3.856582ms to run NodePressure ...
	I0203 22:38:02.106283  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 22:38:02.357304  979588 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0203 22:38:02.362615  979588 kubeadm.go:784] kubelet initialised
	I0203 22:38:02.362643  979588 kubeadm.go:785] duration metric: took 5.310734ms waiting for restarted kubelet to initialise ...
	I0203 22:38:02.362654  979588 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:02.370043  979588 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:02.376254  979588 pod_ready.go:92] pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:02.376290  979588 pod_ready.go:81] duration metric: took 6.215526ms waiting for pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:02.376304  979588 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:04.391744  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:06.392746  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:09.013459  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:11.392017  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:13.393177  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:14.892284  979588 pod_ready.go:92] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.892315  979588 pod_ready.go:81] duration metric: took 12.516003387s waiting for pod "etcd-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.892325  979588 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.896342  979588 pod_ready.go:92] pod "kube-apiserver-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.896361  979588 pod_ready.go:81] duration metric: took 4.029948ms waiting for pod "kube-apiserver-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.896372  979588 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.900459  979588 pod_ready.go:92] pod "kube-controller-manager-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.900476  979588 pod_ready.go:81] duration metric: took 4.097977ms waiting for pod "kube-controller-manager-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.900488  979588 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6q8r8" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.904379  979588 pod_ready.go:92] pod "kube-proxy-6q8r8" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.904395  979588 pod_ready.go:81] duration metric: took 3.900784ms waiting for pod "kube-proxy-6q8r8" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.904404  979588 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.908308  979588 pod_ready.go:92] pod "kube-scheduler-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.908329  979588 pod_ready.go:81] duration metric: took 3.918339ms waiting for pod "kube-scheduler-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.908336  979588 pod_ready.go:38] duration metric: took 12.545672865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
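
Each pod_ready wait above polls until the pod's Ready condition is True (etcd took 12.5s of the 4m budget here). A sketch with client-go's wait helper; clientset wiring is elided, see the earlier rest.Config sketch:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady blocks until the named pod reports Ready=True or timeout.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Usage: waitPodReady(cs, "kube-system", "etcd-pause-868256", 4*time.Minute)
}
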
	I0203 22:38:14.908355  979588 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 22:38:14.915923  979588 ops.go:34] apiserver oom_adj: -16
	I0203 22:38:14.915946  979588 kubeadm.go:637] restartCluster took 55.680977837s
	I0203 22:38:14.915955  979588 kubeadm.go:403] StartCluster complete in 55.764379154s
	I0203 22:38:14.915973  979588 settings.go:142] acquiring lock: {Name:mkf92d82d8749aa11cbf8d7cc1c5c387b3a944f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:14.916045  979588 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:38:14.917278  979588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/kubeconfig: {Name:mk7b0a220bbb894990ed89116f6b1e42d435549f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:14.917594  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0203 22:38:14.917805  979588 config.go:180] Loaded profile config "pause-868256": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:14.917754  979588 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0203 22:38:14.917856  979588 addons.go:65] Setting storage-provisioner=true in profile "pause-868256"
	I0203 22:38:14.917860  979588 addons.go:65] Setting default-storageclass=true in profile "pause-868256"
	I0203 22:38:14.917878  979588 addons.go:227] Setting addon storage-provisioner=true in "pause-868256"
	I0203 22:38:14.917884  979588 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-868256"
	W0203 22:38:14.917890  979588 addons.go:236] addon storage-provisioner should already be in state true
	I0203 22:38:14.917954  979588 host.go:66] Checking if "pause-868256" exists ...
	I0203 22:38:14.918184  979588 cli_runner.go:164] Run: docker container inspect pause-868256 --format={{.State.Status}}
	I0203 22:38:14.918353  979588 cli_runner.go:164] Run: docker container inspect pause-868256 --format={{.State.Status}}
	I0203 22:38:14.918447  979588 kapi.go:59] client config for pause-868256: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.crt", KeyFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.key", CAFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1891540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 22:38:14.921602  979588 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-868256" context rescaled to 1 replicas
	I0203 22:38:14.921645  979588 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 22:38:14.924729  979588 out.go:177] * Verifying Kubernetes components...
	I0203 22:38:14.926880  979588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 22:38:15.030594  979588 node_ready.go:35] waiting up to 6m0s for node "pause-868256" to be "Ready" ...
	I0203 22:38:15.030687  979588 start.go:892] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0203 22:38:15.043534  979588 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 22:38:15.045509  979588 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 22:38:15.045533  979588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 22:38:15.045599  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:38:15.052408  979588 kapi.go:59] client config for pause-868256: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.crt", KeyFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.key", CAFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1891540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 22:38:15.055523  979588 addons.go:227] Setting addon default-storageclass=true in "pause-868256"
	W0203 22:38:15.055547  979588 addons.go:236] addon default-storageclass should already be in state true
	I0203 22:38:15.055578  979588 host.go:66] Checking if "pause-868256" exists ...
	I0203 22:38:15.056030  979588 cli_runner.go:164] Run: docker container inspect pause-868256 --format={{.State.Status}}
	I0203 22:38:15.095382  979588 node_ready.go:49] node "pause-868256" has status "Ready":"True"
	I0203 22:38:15.095405  979588 node_ready.go:38] duration metric: took 64.776768ms waiting for node "pause-868256" to be "Ready" ...
	I0203 22:38:15.095415  979588 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:15.170238  979588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/pause-868256/id_rsa Username:docker}
	I0203 22:38:15.180586  979588 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 22:38:15.180615  979588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 22:38:15.180676  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:38:15.281164  979588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/pause-868256/id_rsa Username:docker}
	I0203 22:38:15.282096  979588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 22:38:15.294779  979588 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:15.398048  979588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 22:38:15.690599  979588 pod_ready.go:92] pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:15.690623  979588 pod_ready.go:81] duration metric: took 395.806992ms waiting for pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:15.690637  979588 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.091573  979588 pod_ready.go:92] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:16.091597  979588 pod_ready.go:81] duration metric: took 400.951095ms waiting for pod "etcd-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.091610  979588 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.302870  979588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.020734682s)
	I0203 22:38:16.305381  979588 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0203 22:38:16.306921  979588 addons.go:492] enable addons completed in 1.389182055s: enabled=[storage-provisioner default-storageclass]
	I0203 22:38:16.490944  979588 pod_ready.go:92] pod "kube-apiserver-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:16.490967  979588 pod_ready.go:81] duration metric: took 399.350207ms waiting for pod "kube-apiserver-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.490977  979588 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.895568  979588 pod_ready.go:92] pod "kube-controller-manager-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:16.895592  979588 pod_ready.go:81] duration metric: took 404.606919ms waiting for pod "kube-controller-manager-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.895606  979588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6q8r8" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:17.291424  979588 pod_ready.go:92] pod "kube-proxy-6q8r8" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:17.291453  979588 pod_ready.go:81] duration metric: took 395.838528ms waiting for pod "kube-proxy-6q8r8" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:17.291467  979588 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:17.690304  979588 pod_ready.go:92] pod "kube-scheduler-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:17.690326  979588 pod_ready.go:81] duration metric: took 398.850097ms waiting for pod "kube-scheduler-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:17.690333  979588 pod_ready.go:38] duration metric: took 2.59490922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:17.690353  979588 api_server.go:51] waiting for apiserver process to appear ...
	I0203 22:38:17.690389  979588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 22:38:17.700729  979588 api_server.go:71] duration metric: took 2.779046006s to wait for apiserver process to appear ...
	I0203 22:38:17.700770  979588 api_server.go:87] waiting for apiserver healthz status ...
	I0203 22:38:17.700785  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:38:17.705049  979588 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0203 22:38:17.706096  979588 api_server.go:140] control plane version: v1.26.1
	I0203 22:38:17.706119  979588 api_server.go:130] duration metric: took 5.342484ms to wait for apiserver health ...
	I0203 22:38:17.706130  979588 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 22:38:17.893912  979588 system_pods.go:59] 7 kube-system pods found
	I0203 22:38:17.893946  979588 system_pods.go:61] "coredns-787d4945fb-dd5vv" [aed13814-af10-4c1c-9548-20630079cd3c] Running
	I0203 22:38:17.893953  979588 system_pods.go:61] "etcd-pause-868256" [595c2af1-3166-4d59-969d-bc282f646ed5] Running
	I0203 22:38:17.893959  979588 system_pods.go:61] "kube-apiserver-pause-868256" [2a4d2f7d-025b-47a7-99c5-70079183e798] Running
	I0203 22:38:17.893966  979588 system_pods.go:61] "kube-controller-manager-pause-868256" [8f8a526f-dea5-4e08-8258-dd4e4654ae32] Running
	I0203 22:38:17.893972  979588 system_pods.go:61] "kube-proxy-6q8r8" [a9c6e5f1-fd98-4bc1-aae7-b0485f877616] Running
	I0203 22:38:17.893978  979588 system_pods.go:61] "kube-scheduler-pause-868256" [72bf6a79-cdaf-46bb-93fc-f8d402880694] Running
	I0203 22:38:17.893984  979588 system_pods.go:61] "storage-provisioner" [48da2fca-7198-449d-bebd-84e7ce3d61e0] Running
	I0203 22:38:17.893991  979588 system_pods.go:74] duration metric: took 187.854082ms to wait for pod list to return data ...
	I0203 22:38:17.894002  979588 default_sa.go:34] waiting for default service account to be created ...
	I0203 22:38:18.090160  979588 default_sa.go:45] found service account: "default"
	I0203 22:38:18.090187  979588 default_sa.go:55] duration metric: took 196.177872ms for default service account to be created ...
	I0203 22:38:18.090198  979588 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 22:38:18.293177  979588 system_pods.go:86] 7 kube-system pods found
	I0203 22:38:18.293208  979588 system_pods.go:89] "coredns-787d4945fb-dd5vv" [aed13814-af10-4c1c-9548-20630079cd3c] Running
	I0203 22:38:18.293216  979588 system_pods.go:89] "etcd-pause-868256" [595c2af1-3166-4d59-969d-bc282f646ed5] Running
	I0203 22:38:18.293224  979588 system_pods.go:89] "kube-apiserver-pause-868256" [2a4d2f7d-025b-47a7-99c5-70079183e798] Running
	I0203 22:38:18.293232  979588 system_pods.go:89] "kube-controller-manager-pause-868256" [8f8a526f-dea5-4e08-8258-dd4e4654ae32] Running
	I0203 22:38:18.293238  979588 system_pods.go:89] "kube-proxy-6q8r8" [a9c6e5f1-fd98-4bc1-aae7-b0485f877616] Running
	I0203 22:38:18.293244  979588 system_pods.go:89] "kube-scheduler-pause-868256" [72bf6a79-cdaf-46bb-93fc-f8d402880694] Running
	I0203 22:38:18.293251  979588 system_pods.go:89] "storage-provisioner" [48da2fca-7198-449d-bebd-84e7ce3d61e0] Running
	I0203 22:38:18.293262  979588 system_pods.go:126] duration metric: took 203.057207ms to wait for k8s-apps to be running ...
	I0203 22:38:18.293276  979588 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 22:38:18.293331  979588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 22:38:18.305047  979588 system_svc.go:56] duration metric: took 11.756989ms WaitForService to wait for kubelet.
	I0203 22:38:18.305082  979588 kubeadm.go:578] duration metric: took 3.383408222s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0203 22:38:18.305108  979588 node_conditions.go:102] verifying NodePressure condition ...
	I0203 22:38:18.491041  979588 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0203 22:38:18.491067  979588 node_conditions.go:123] node cpu capacity is 8
	I0203 22:38:18.491078  979588 node_conditions.go:105] duration metric: took 185.954067ms to run NodePressure ...
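
The NodePressure verification reads node capacity (cpu 8 and ephemeral storage 304681132Ki above) and fails if any pressure condition is True. A sketch, again with clientset wiring elided:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyNodePressure prints capacities and rejects any node under pressure.
func verifyNodePressure(cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s has %s", n.Name, c.Type)
				}
			}
		}
	}
	return nil
}

func main() {
	// Usage: verifyNodePressure(cs) with a clientset built as sketched earlier.
}
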
	I0203 22:38:18.491092  979588 start.go:228] waiting for startup goroutines ...
	I0203 22:38:18.491102  979588 start.go:233] waiting for cluster config update ...
	I0203 22:38:18.491115  979588 start.go:240] writing updated cluster config ...
	I0203 22:38:18.491444  979588 ssh_runner.go:195] Run: rm -f paused
	I0203 22:38:18.544355  979588 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0203 22:38:18.546936  979588 out.go:177] * Done! kubectl is now configured to use "pause-868256" cluster and "default" namespace by default

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-868256
helpers_test.go:235: (dbg) docker inspect pause-868256:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23",
	        "Created": "2023-02-03T22:36:23.260201893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 957973,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T22:36:23.761992846Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23/hosts",
	        "LogPath": "/var/lib/docker/containers/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23-json.log",
	        "Name": "/pause-868256",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-868256:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-868256",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b7dd34239fcd3c6a2da0e0eb5ce926abfebc6db69c11302ab54c4ddcb876819-init/diff:/var/lib/docker/overlay2/0b475e32bad1f0dfced579ecb7b5cc72250aea7cec59e31a4743cd3a0d99e940/diff:/var/lib/docker/overlay2/aa2fe43966fc90171971fa0cf45ed489397176948a5d7e5c488c0895ea14fcf9/diff:/var/lib/docker/overlay2/d486d5af4f47c81a76d06ab38edcdd6e7c4c6d44bfccdebbbb9b1e69d39d2b05/diff:/var/lib/docker/overlay2/326412ac9c29a61ae48e2ee6d8d6f87ec6a4fd1bd6016dffb2811bfbfba591f9/diff:/var/lib/docker/overlay2/78f53b59df4fb8a2a788513fbe42773235fbfeeee25597b9ed08ab74e82151c2/diff:/var/lib/docker/overlay2/dd8122f0f83d412f78fbddee374294a4b80687e5536b80215002695f569198f2/diff:/var/lib/docker/overlay2/cc67dde78b4c1492ebd02fe71402ab41b661ee204fbde6d210cf8509387b098f/diff:/var/lib/docker/overlay2/a2b4916ad1fd3586e65047fb83df5d41ebcab71ac2ffa08b0e036e4678cb710a/diff:/var/lib/docker/overlay2/034739ad6486ba53fbfe3b3b421d15c6f3a0dd8fde3a43b07e103abff096d4f1/diff:/var/lib/docker/overlay2/307edad9ab61a3663c90810503decfdc670fe1869242a7f31075b6e59d76541a/diff:/var/lib/docker/overlay2/9c55defe4ce8df151985a8f224f3ed60b3859894f0e563ad67f2f4d1732230be/diff:/var/lib/docker/overlay2/e943e6cdbde9389f9a98c170180fedff4c2a9f95d9932705ca166be2d938da89/diff:/var/lib/docker/overlay2/cfdded024a919d0fb407d0de88be58a616371fce4c0976bd8002f580d767b842/diff:/var/lib/docker/overlay2/5d723f8d0c80d5508336518cd9b29f89acf16286d8ccdfb78feb1e37fe0bf064/diff:/var/lib/docker/overlay2/c47949bf11583f6ebcbf720cff56c46f781041344baa330c0bc5c1b61dad2f55/diff:/var/lib/docker/overlay2/27ad1f98760d8a67bd303c2b5611897e161a80beb6c7ed104208b48dd7b91379/diff:/var/lib/docker/overlay2/a0e957e1d2331cbc92f5a999b543942f2031b84ea47f403a499e7bef91d65899/diff:/var/lib/docker/overlay2/a229667103290aefe4a619724ab1234e77b9db8874253aa22c86042b8892c830/diff:/var/lib/docker/overlay2/467130c8e8a7564760c18a6fe07094da15434d5f1e474416b9572afe4b482f35/diff:/var/lib/docker/overlay2/cd5ca47a80e9064bab4601161848e63acf588fa9229e1174ab542acb88a97b16/diff:/var/lib/docker/overlay2/a797536bd93f660222d6488b3f3ccb7d093128ad2c053b2e2be52eef7031bea6/diff:/var/lib/docker/overlay2/248250b521a0dd8701f96cf524c3c3739a1eff512d14fb30c74827361b312b32/diff:/var/lib/docker/overlay2/062e2ddeefb5ad4346bda8722a899aa52dda719d4249498404cb2d4892536de4/diff:/var/lib/docker/overlay2/fc997cd730a7dd26b34f6e703d278a224680a059dacf504900111dc9a025bbf0/diff:/var/lib/docker/overlay2/f577bb4339434ce3c9ded35d7cae363bc0f8211505f076fabb90fba761421598/diff:/var/lib/docker/overlay2/e8ac8d4860f647d09162c5b7a3176ddd3c2e474bbccd68be7c16766a7fd23cc3/diff:/var/lib/docker/overlay2/83c501c19fcfb1a35a817eaeeb945d275930e39c796dfc74152c43fdde79ab84/diff:/var/lib/docker/overlay2/0e920c20ffbb5e7feb23e6614ca1f2087335c096eb0309328a0689561d3a34b7/diff:/var/lib/docker/overlay2/fddb0961123e581f39614f85a12371d378053c880449edc8ef02b7b59d37acbd/diff:/var/lib/docker/overlay2/79a3dd2dc2deaed4119301832c81086def768bb1f385f355d4040d07da72699c/diff:/var/lib/docker/overlay2/d8ab98e1745fd7d47f1072f953123e3f453d00a4142308cac37c683e7e215755/diff:/var/lib/docker/overlay2/cf689ce035c88cc3cd979840cd72f78a9a4dcc62b2908837d83e705d0188a595/diff:/var/lib/docker/overlay2/f3ef7125ac2d8a6c9d2b633eb3fb34158b96a4639a2ef3d6d3bed8c91b5a6f2f/diff:/var/lib/docker/overlay2/e4e0e186cf2cf07dae99d67e45b1e480bbf4af91d131348c6d2124f0b201a650/diff:/var/lib/docker/overlay2/a50f9577818b2898c6d148599e38b6a88d0d80085a584bba96928c73f334cbcd/diff:/var/lib/docker/overlay2/2efc2fb2ee969b3eb5d1bde8184f7a96ea316eda6b6a74665936973ea3f3bd6b/diff:/var/lib/docker/overlay2/76cfcade4e4ca9badc64f6ada01efa5198447e393a87405de24b1418986c5e84/diff:/var/lib/docker/overlay2/503b12ed217c06e41cae8cd4644f7e70792be89545abf400682521433255eb6c/diff:/var/lib/docker/overlay2/f051a3728d7742609a1e79f10ecf540a169426957992d56f8c95311390abf08c/diff:/var/lib/docker/overlay2/dd655e28a7bca3a64c71fc29b401269c2f81b35cfcd5cdf0174304407eaf4433/diff:/var/lib/docker/overlay2/3b2197e4d79d675e680efd7a515dbd55aeac009a711fd6f0c3986eaa894c0e9d/diff:/var/lib/docker/overlay2/10aec7220005d1e9a6082e19fec2237d778d9d752c48da1ce707c0001e09f158/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b7dd34239fcd3c6a2da0e0eb5ce926abfebc6db69c11302ab54c4ddcb876819/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b7dd34239fcd3c6a2da0e0eb5ce926abfebc6db69c11302ab54c4ddcb876819/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b7dd34239fcd3c6a2da0e0eb5ce926abfebc6db69c11302ab54c4ddcb876819/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-868256",
	                "Source": "/var/lib/docker/volumes/pause-868256/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-868256",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-868256",
	                "name.minikube.sigs.k8s.io": "pause-868256",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8a59c4ee03c9dc235af6991d7b1f1f46e8572118408bb5130b803ba9ad30e3f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33311"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33310"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33307"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33309"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33308"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a8a59c4ee03c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-868256": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a4c14c9470c0",
	                        "pause-868256"
	                    ],
	                    "NetworkID": "561f7143c42d387f8e4c8725c3705eedda5f89c460cd4b2e7e8dd55f7e009901",
	                    "EndpointID": "baf703b9428a8be97ace56ce7385a5313d5ed205e25bd4c2c491adcf4f056294",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
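Editorial note: the NetworkSettings.Ports block in the inspect output above shows each guest port (22, 2376, 5000, 8443, 32443) published on an ephemeral 127.0.0.1 host port. A minimal Go sketch of reading those bindings back out of docker inspect JSON; the profile name pause-868256 is taken from the log, everything else is illustrative and not part of the test suite:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// binding mirrors one entry of NetworkSettings.Ports in the inspect JSON above.
	type binding struct {
		HostIp   string
		HostPort string
	}

	type container struct {
		NetworkSettings struct {
			Ports map[string][]binding
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "pause-868256").Output()
		if err != nil {
			log.Fatal(err)
		}
		var cs []container // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &cs); err != nil {
			log.Fatal(err)
		}
		for port, bs := range cs[0].NetworkSettings.Ports {
			for _, b := range bs {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}

The test helpers reach the same data through a Go template instead of JSON decoding, as in the docker container inspect -f invocation visible near the end of this log.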
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-868256 -n pause-868256
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-868256 logs -n 25
=== CONT  TestPause/serial/SecondStartNoReconfiguration
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-868256 logs -n 25: (1.557258551s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-955330          | kubernetes-upgrade-955330 | jenkins | v1.29.0 | 03 Feb 23 22:35 UTC | 03 Feb 23 22:35 UTC |
	| start   | -p force-systemd-env-432494           | force-systemd-env-432494  | jenkins | v1.29.0 | 03 Feb 23 22:35 UTC | 03 Feb 23 22:36 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-955330          | kubernetes-upgrade-955330 | jenkins | v1.29.0 | 03 Feb 23 22:35 UTC | 03 Feb 23 22:36 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-804939             | force-systemd-flag-804939 | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:36 UTC |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-804939          | force-systemd-flag-804939 | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:36 UTC |
	| start   | -p pause-868256 --memory=2048         | pause-868256              | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:37 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-955330          | kubernetes-upgrade-955330 | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-955330          | kubernetes-upgrade-955330 | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:36 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-432494              | force-systemd-env-432494  | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:36 UTC |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-432494           | force-systemd-env-432494  | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:36 UTC |
	| start   | -p cert-expiration-012867             | cert-expiration-012867    | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:37 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-955330          | kubernetes-upgrade-955330 | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:36 UTC |
	| start   | -p docker-flags-636731                | docker-flags-636731       | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:37 UTC |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p pause-868256                       | pause-868256              | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:38 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p running-upgrade-086031             | running-upgrade-086031    | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | docker-flags-636731 ssh               | docker-flags-636731       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-636731 ssh               | docker-flags-636731       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-636731                | docker-flags-636731       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	| delete  | -p running-upgrade-086031             | running-upgrade-086031    | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	| start   | -p cert-options-145838                | cert-options-145838       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p auto-770968 --memory=3072          | auto-770968               | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | cert-options-145838 ssh               | cert-options-145838       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-145838 -- sudo        | cert-options-145838       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-145838                | cert-options-145838       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:38 UTC |
	| start   | -p kindnet-770968                     | kindnet-770968            | jenkins | v1.29.0 | 03 Feb 23 22:38 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker         |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/03 22:38:02
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
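	// --- editorial sketch, not part of the captured log ---
	// The header above gives the klog line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg.
	// Splitting such a line in Go (regexp is from the standard library; the pattern itself is illustrative):
	//
	//	klogRe := regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6}) +(\d+) (\S+?):(\d+)\] (.*)$`)
	//	m := klogRe.FindStringSubmatch("I0203 22:38:02.961492 1004591 out.go:296] Setting OutFile to fd 1 ...")
	//	// m[1] severity, m[2] mmdd, m[3] time, m[4] thread id, m[5] file, m[6] line, m[7] message
	// --- end editorial sketch ---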
	I0203 22:38:02.961492 1004591 out.go:296] Setting OutFile to fd 1 ...
	I0203 22:38:02.961596 1004591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:38:02.961603 1004591 out.go:309] Setting ErrFile to fd 2...
	I0203 22:38:02.961608 1004591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:38:02.961719 1004591 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15770-643340/.minikube/bin
	I0203 22:38:02.962400 1004591 out.go:303] Setting JSON to false
	I0203 22:38:02.964255 1004591 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8433,"bootTime":1675455450,"procs":952,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 22:38:02.964369 1004591 start.go:135] virtualization: kvm guest
	I0203 22:38:02.967570 1004591 out.go:177] * [kindnet-770968] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 22:38:02.969472 1004591 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 22:38:02.969414 1004591 notify.go:220] Checking for updates...
	I0203 22:38:02.971069 1004591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 22:38:02.972916 1004591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:38:02.974648 1004591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	I0203 22:38:02.976527 1004591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 22:38:02.978300 1004591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 22:38:02.980613 1004591 config.go:180] Loaded profile config "auto-770968": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:02.980753 1004591 config.go:180] Loaded profile config "cert-expiration-012867": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:02.980876 1004591 config.go:180] Loaded profile config "pause-868256": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:02.980938 1004591 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 22:38:03.072418 1004591 docker.go:141] docker version: linux-23.0.0:Docker Engine - Community
	I0203 22:38:03.072528 1004591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 22:38:03.211307 1004591 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:47 SystemTime:2023-02-03 22:38:03.201040188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:31aa4358a36870b21a992d3ad2bef29e1d693bec Expected:31aa4358a36870b21a992d3ad2bef29e1d693bec} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 22:38:03.211421 1004591 docker.go:282] overlay module found
	I0203 22:38:03.214267 1004591 out.go:177] * Using the docker driver based on user configuration
	I0203 22:38:03.215944 1004591 start.go:296] selected driver: docker
	I0203 22:38:03.215978 1004591 start.go:857] validating driver "docker" against <nil>
	I0203 22:38:03.216006 1004591 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 22:38:03.216985 1004591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 22:38:03.351643 1004591 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:47 SystemTime:2023-02-03 22:38:03.342284737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:31aa4358a36870b21a992d3ad2bef29e1d693bec Expected:31aa4358a36870b21a992d3ad2bef29e1d693bec} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 22:38:03.351770 1004591 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0203 22:38:03.351973 1004591 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 22:38:03.354522 1004591 out.go:177] * Using Docker driver with root privileges
	I0203 22:38:03.357363 1004591 cni.go:84] Creating CNI manager for "kindnet"
	I0203 22:38:03.357396 1004591 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0203 22:38:03.357410 1004591 start_flags.go:319] config:
	{Name:kindnet-770968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kindnet-770968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 22:38:03.359508 1004591 out.go:177] * Starting control plane node kindnet-770968 in cluster kindnet-770968
	I0203 22:38:03.361539 1004591 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 22:38:03.363451 1004591 out.go:177] * Pulling base image ...
	I0203 22:38:03.365291 1004591 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 22:38:03.365358 1004591 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0203 22:38:03.365373 1004591 cache.go:57] Caching tarball of preloaded images
	I0203 22:38:03.365413 1004591 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 22:38:03.365489 1004591 preload.go:174] Found /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 22:38:03.365505 1004591 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0203 22:38:03.365725 1004591 profile.go:148] Saving config to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/config.json ...
	I0203 22:38:03.365764 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/config.json: {Name:mk5f9111854d4b577e0eaace8a28dd6870591f7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:03.440004 1004591 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon, skipping pull
	I0203 22:38:03.440032 1004591 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in daemon, skipping load
	I0203 22:38:03.440056 1004591 cache.go:193] Successfully downloaded all kic artifacts
	I0203 22:38:03.440092 1004591 start.go:364] acquiring machines lock for kindnet-770968: {Name:mk4aa1a98cb1fcf6397c55c385c6f84ed8f4ce0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 22:38:03.440226 1004591 start.go:368] acquired machines lock for "kindnet-770968" in 111.78µs
	I0203 22:38:03.440262 1004591 start.go:93] Provisioning new machine with config: &{Name:kindnet-770968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kindnet-770968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 22:38:03.440461 1004591 start.go:125] createHost starting for "" (driver="docker")
	I0203 22:38:02.064102  979588 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 22:38:02.074550  979588 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0203 22:38:02.091209  979588 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 22:38:02.102275  979588 system_pods.go:59] 6 kube-system pods found
	I0203 22:38:02.102317  979588 system_pods.go:61] "coredns-787d4945fb-dd5vv" [aed13814-af10-4c1c-9548-20630079cd3c] Running
	I0203 22:38:02.102329  979588 system_pods.go:61] "etcd-pause-868256" [595c2af1-3166-4d59-969d-bc282f646ed5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 22:38:02.102339  979588 system_pods.go:61] "kube-apiserver-pause-868256" [2a4d2f7d-025b-47a7-99c5-70079183e798] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 22:38:02.102349  979588 system_pods.go:61] "kube-controller-manager-pause-868256" [8f8a526f-dea5-4e08-8258-dd4e4654ae32] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 22:38:02.102366  979588 system_pods.go:61] "kube-proxy-6q8r8" [a9c6e5f1-fd98-4bc1-aae7-b0485f877616] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0203 22:38:02.102378  979588 system_pods.go:61] "kube-scheduler-pause-868256" [72bf6a79-cdaf-46bb-93fc-f8d402880694] Running
	I0203 22:38:02.102386  979588 system_pods.go:74] duration metric: took 11.150861ms to wait for pod list to return data ...
	I0203 22:38:02.102398  979588 node_conditions.go:102] verifying NodePressure condition ...
	I0203 22:38:02.106217  979588 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0203 22:38:02.106247  979588 node_conditions.go:123] node cpu capacity is 8
	I0203 22:38:02.106260  979588 node_conditions.go:105] duration metric: took 3.856582ms to run NodePressure ...
	I0203 22:38:02.106283  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 22:38:02.357304  979588 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0203 22:38:02.362615  979588 kubeadm.go:784] kubelet initialised
	I0203 22:38:02.362643  979588 kubeadm.go:785] duration metric: took 5.310734ms waiting for restarted kubelet to initialise ...
	I0203 22:38:02.362654  979588 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:02.370043  979588 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:02.376254  979588 pod_ready.go:92] pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:02.376290  979588 pod_ready.go:81] duration metric: took 6.215526ms waiting for pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:02.376304  979588 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:04.391744  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:06.392746  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
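	// --- editorial sketch, not part of the captured log ---
	// The pod_ready lines above poll each system-critical pod until its Ready
	// condition is True or the 4m0s budget expires. The shape of that wait loop,
	// with checkPodReady as a hypothetical stand-in for the real client-go lookup:
	//
	//	deadline := time.Now().Add(4 * time.Minute)
	//	for !checkPodReady("kube-system", "etcd-pause-868256") {
	//		if time.Now().After(deadline) {
	//			return fmt.Errorf("pod etcd-pause-868256 not Ready within 4m0s")
	//		}
	//		time.Sleep(2 * time.Second) // roughly the polling cadence visible in the log
	//	}
	// --- end editorial sketch ---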
	I0203 22:38:01.869330  993024 ops.go:34] apiserver oom_adj: -16
	I0203 22:38:01.869359  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:02.476387  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:02.976871  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:03.476460  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:03.976437  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:04.477266  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:04.977281  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:05.477172  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:05.976833  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:06.476388  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:03.443509 1004591 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0203 22:38:03.443770 1004591 start.go:159] libmachine.API.Create for "kindnet-770968" (driver="docker")
	I0203 22:38:03.443804 1004591 client.go:168] LocalClient.Create starting
	I0203 22:38:03.443914 1004591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem
	I0203 22:38:03.443950 1004591 main.go:141] libmachine: Decoding PEM data...
	I0203 22:38:03.443967 1004591 main.go:141] libmachine: Parsing certificate...
	I0203 22:38:03.444024 1004591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15770-643340/.minikube/certs/cert.pem
	I0203 22:38:03.444041 1004591 main.go:141] libmachine: Decoding PEM data...
	I0203 22:38:03.444050 1004591 main.go:141] libmachine: Parsing certificate...
	I0203 22:38:03.444446 1004591 cli_runner.go:164] Run: docker network inspect kindnet-770968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0203 22:38:03.519482 1004591 cli_runner.go:211] docker network inspect kindnet-770968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0203 22:38:03.519566 1004591 network_create.go:281] running [docker network inspect kindnet-770968] to gather additional debugging logs...
	I0203 22:38:03.519592 1004591 cli_runner.go:164] Run: docker network inspect kindnet-770968
	W0203 22:38:03.594705 1004591 cli_runner.go:211] docker network inspect kindnet-770968 returned with exit code 1
	I0203 22:38:03.594756 1004591 network_create.go:284] error running [docker network inspect kindnet-770968]: docker network inspect kindnet-770968: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-770968 not found
	I0203 22:38:03.594773 1004591 network_create.go:286] output of [docker network inspect kindnet-770968]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-770968 not found
	
	** /stderr **
	I0203 22:38:03.594851 1004591 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0203 22:38:03.670584 1004591 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-27eee80fa331 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:28:fa:75:ad} reservation:<nil>}
	I0203 22:38:03.671765 1004591 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-516c71c0568d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:3f:36:3a:09} reservation:<nil>}
	I0203 22:38:03.672868 1004591 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000c4c030}
	I0203 22:38:03.672891 1004591 network_create.go:123] attempt to create docker network kindnet-770968 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0203 22:38:03.672938 1004591 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-770968 kindnet-770968
	I0203 22:38:03.785661 1004591 network_create.go:107] docker network kindnet-770968 192.168.67.0/24 created
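	// --- editorial sketch, not part of the captured log ---
	// The network.go lines above show the free-subnet scan: 192.168.49.0/24 and
	// 192.168.58.0/24 are already taken by existing bridges, so the next candidate,
	// 192.168.67.0/24, is used. The candidates in this log advance by 9 in the third
	// octet; isTaken is a hypothetical helper backed by docker network inspect:
	//
	//	for third := 49; third <= 254; third += 9 {
	//		subnet := fmt.Sprintf("192.168.%d.0/24", third)
	//		if !isTaken(subnet) {
	//			return subnet, nil
	//		}
	//	}
	//	return "", fmt.Errorf("no free private /24 found")
	// --- end editorial sketch ---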
	I0203 22:38:03.785690 1004591 kic.go:117] calculated static IP "192.168.67.2" for the "kindnet-770968" container
	I0203 22:38:03.785746 1004591 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0203 22:38:03.866191 1004591 cli_runner.go:164] Run: docker volume create kindnet-770968 --label name.minikube.sigs.k8s.io=kindnet-770968 --label created_by.minikube.sigs.k8s.io=true
	I0203 22:38:03.943524 1004591 oci.go:103] Successfully created a docker volume kindnet-770968
	I0203 22:38:03.943609 1004591 cli_runner.go:164] Run: docker run --rm --name kindnet-770968-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-770968 --entrypoint /usr/bin/test -v kindnet-770968:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -d /var/lib
	I0203 22:38:04.653592 1004591 oci.go:107] Successfully prepared a docker volume kindnet-770968
	I0203 22:38:04.653672 1004591 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 22:38:04.653703 1004591 kic.go:190] Starting extracting preloaded images to volume ...
	I0203 22:38:04.653800 1004591 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-770968:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir
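	// --- editorial sketch, not part of the captured log ---
	// The docker run above is the preload shortcut: a throwaway container mounts the
	// lz4-compressed image tarball read-only alongside the profile's named volume, and
	// tar unpacks the cached images straight into the volume that later becomes the
	// node's /var. The same invocation built with os/exec (paths shortened; illustrative):
	//
	//	cmd := exec.Command("docker", "run", "--rm",
	//		"--entrypoint", "/usr/bin/tar",
	//		"-v", preloadTarball+":/preloaded.tar:ro", // host path of the preloaded .tar.lz4
	//		"-v", "kindnet-770968:/extractDir",        // named volume shared with the node container
	//		kicBaseImage,                              // gcr.io/k8s-minikube/kicbase-builds:v0.0.37-...
	//		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	//	out, err := cmd.CombinedOutput()
	// --- end editorial sketch ---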
	I0203 22:38:09.013459  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:11.392017  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:06.977114  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:07.476494  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:07.976424  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:08.476707  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:08.976968  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:09.476478  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:09.976259  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:10.477171  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:10.977273  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:11.477102  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:09.928372 1004591 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-770968:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.27447243s)
	I0203 22:38:09.928408 1004591 kic.go:199] duration metric: took 5.274701 seconds to extract preloaded images to volume
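	The preload tarball named above (preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4) carries a ready-made /var/lib/docker overlay2 tree, so the node container comes up with its images already in place instead of pulling them individually. A minimal sketch for inspecting the populated volume by hand, assuming any small utility image such as busybox (not part of this run):
	
		docker run --rm -v kindnet-770968:/var busybox ls /var/lib/docker
	
	The docker images listings further down in this log show the same preloaded image set from inside the node.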
	W0203 22:38:09.928565 1004591 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
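	The swap-limit warning comes from Docker itself when the host kernel boots without swap accounting; it is harmless for these tests. Per standard Docker documentation (nothing in this run does this), it can be silenced on Debian/Ubuntu hosts by enabling swap accounting at boot:
	
		grep -o 'swapaccount=[01]' /proc/cmdline    # check the current boot
		# add "cgroup_enable=memory swapaccount=1" to GRUB_CMDLINE_LINUX, then update-grub and reboot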
	I0203 22:38:09.928701 1004591 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0203 22:38:10.068005 1004591 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-770968 --name kindnet-770968 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-770968 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-770968 --network kindnet-770968 --ip 192.168.67.2 --volume kindnet-770968:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8
	I0203 22:38:10.570837 1004591 cli_runner.go:164] Run: docker container inspect kindnet-770968 --format={{.State.Running}}
	I0203 22:38:10.658192 1004591 cli_runner.go:164] Run: docker container inspect kindnet-770968 --format={{.State.Status}}
	I0203 22:38:10.731105 1004591 cli_runner.go:164] Run: docker exec kindnet-770968 stat /var/lib/dpkg/alternatives/iptables
	I0203 22:38:10.835591 1004591 oci.go:144] the created container "kindnet-770968" has a running status.
	I0203 22:38:10.835627 1004591 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa...
	I0203 22:38:11.045195 1004591 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0203 22:38:11.184176 1004591 cli_runner.go:164] Run: docker container inspect kindnet-770968 --format={{.State.Status}}
	I0203 22:38:11.261864 1004591 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0203 22:38:11.261893 1004591 kic_runner.go:114] Args: [docker exec --privileged kindnet-770968 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0203 22:38:11.394931 1004591 cli_runner.go:164] Run: docker container inspect kindnet-770968 --format={{.State.Status}}
	I0203 22:38:11.468436 1004591 machine.go:88] provisioning docker machine ...
	I0203 22:38:11.468520 1004591 ubuntu.go:169] provisioning hostname "kindnet-770968"
	I0203 22:38:11.468600 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:11.545680 1004591 main.go:141] libmachine: Using SSH client type: native
	I0203 22:38:11.545906 1004591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33336 <nil> <nil>}
	I0203 22:38:11.545929 1004591 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-770968 && echo "kindnet-770968" | sudo tee /etc/hostname
	I0203 22:38:11.690578 1004591 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-770968
	
	I0203 22:38:11.690678 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:11.763689 1004591 main.go:141] libmachine: Using SSH client type: native
	I0203 22:38:11.763865 1004591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33336 <nil> <nil>}
	I0203 22:38:11.763888 1004591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-770968' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-770968/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-770968' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 22:38:11.892595 1004591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
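	The SSH snippet above is an idempotent /etc/hosts fix-up: it does nothing if some line already ends in the new hostname, rewrites an existing 127.0.1.1 entry if one is present, and appends otherwise. Whichever branch runs, the intended end state is a line of the form:
	
		127.0.1.1 kindnet-770968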
	I0203 22:38:11.892629 1004591 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15770-643340/.minikube CaCertPath:/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15770-643340/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15770-643340/.minikube}
	I0203 22:38:11.892654 1004591 ubuntu.go:177] setting up certificates
	I0203 22:38:11.892665 1004591 provision.go:83] configureAuth start
	I0203 22:38:11.892726 1004591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-770968
	I0203 22:38:11.963212 1004591 provision.go:138] copyHostCerts
	I0203 22:38:11.963277 1004591 exec_runner.go:144] found /home/jenkins/minikube-integration/15770-643340/.minikube/ca.pem, removing ...
	I0203 22:38:11.963292 1004591 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15770-643340/.minikube/ca.pem
	I0203 22:38:11.963362 1004591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15770-643340/.minikube/ca.pem (1082 bytes)
	I0203 22:38:11.963457 1004591 exec_runner.go:144] found /home/jenkins/minikube-integration/15770-643340/.minikube/cert.pem, removing ...
	I0203 22:38:11.963466 1004591 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15770-643340/.minikube/cert.pem
	I0203 22:38:11.963488 1004591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15770-643340/.minikube/cert.pem (1123 bytes)
	I0203 22:38:11.963549 1004591 exec_runner.go:144] found /home/jenkins/minikube-integration/15770-643340/.minikube/key.pem, removing ...
	I0203 22:38:11.963556 1004591 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15770-643340/.minikube/key.pem
	I0203 22:38:11.963576 1004591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15770-643340/.minikube/key.pem (1679 bytes)
	I0203 22:38:11.963630 1004591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15770-643340/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca-key.pem org=jenkins.kindnet-770968 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-770968]
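	The server certificate is generated with SANs covering the container IP, loopback, and the minikube hostnames, so the TLS-protected Docker endpoint validates under any of those names. A quick illustrative check once server.pem exists (not performed by this run):
	
		openssl x509 -noout -text -in /home/jenkins/minikube-integration/15770-643340/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'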
	I0203 22:38:12.292618 1004591 provision.go:172] copyRemoteCerts
	I0203 22:38:12.292682 1004591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 22:38:12.292731 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:12.364692 1004591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa Username:docker}
	I0203 22:38:12.456649 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 22:38:12.475748 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0203 22:38:12.495875 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 22:38:12.520098 1004591 provision.go:86] duration metric: configureAuth took 627.416996ms
	I0203 22:38:12.520131 1004591 ubuntu.go:193] setting minikube options for container-runtime
	I0203 22:38:12.520405 1004591 config.go:180] Loaded profile config "kindnet-770968": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:12.520477 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:12.597442 1004591 main.go:141] libmachine: Using SSH client type: native
	I0203 22:38:12.597639 1004591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33336 <nil> <nil>}
	I0203 22:38:12.597655 1004591 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 22:38:12.729193 1004591 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0203 22:38:12.729221 1004591 ubuntu.go:71] root file system type: overlay
	I0203 22:38:12.729450 1004591 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 22:38:12.729520 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:12.805224 1004591 main.go:141] libmachine: Using SSH client type: native
	I0203 22:38:12.805370 1004591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33336 <nil> <nil>}
	I0203 22:38:12.805430 1004591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 22:38:12.942364 1004591 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 22:38:12.942439 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
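	Note the bare ExecStart= line in the unit just written: systemd rejects multiple ExecStart= settings for anything but Type=oneshot services, so the empty assignment first clears the value inherited from the base configuration, exactly as the comments in the unit explain. Once the new unit is installed (see the diff further below), the effective command can be confirmed with:
	
		systemctl cat docker.service | grep '^ExecStart='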
	I0203 22:38:11.976458  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:12.476409  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:12.976521  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:13.476955  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:13.977055  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:14.148500  993024 kubeadm.go:1073] duration metric: took 12.710452798s to wait for elevateKubeSystemPrivileges.
	I0203 22:38:14.148538  993024 kubeadm.go:403] StartCluster complete in 27.462557503s
	I0203 22:38:14.148563  993024 settings.go:142] acquiring lock: {Name:mkf92d82d8749aa11cbf8d7cc1c5c387b3a944f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:14.148652  993024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:38:14.150281  993024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/kubeconfig: {Name:mk7b0a220bbb894990ed89116f6b1e42d435549f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:14.151672  993024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0203 22:38:14.151961  993024 config.go:180] Loaded profile config "auto-770968": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:14.152014  993024 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0203 22:38:14.152086  993024 addons.go:65] Setting storage-provisioner=true in profile "auto-770968"
	I0203 22:38:14.152120  993024 addons.go:227] Setting addon storage-provisioner=true in "auto-770968"
	W0203 22:38:14.152128  993024 addons.go:236] addon storage-provisioner should already be in state true
	I0203 22:38:14.152178  993024 host.go:66] Checking if "auto-770968" exists ...
	I0203 22:38:14.152720  993024 addons.go:65] Setting default-storageclass=true in profile "auto-770968"
	I0203 22:38:14.152744  993024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-770968"
	I0203 22:38:14.152785  993024 cli_runner.go:164] Run: docker container inspect auto-770968 --format={{.State.Status}}
	I0203 22:38:14.153024  993024 cli_runner.go:164] Run: docker container inspect auto-770968 --format={{.State.Status}}
	I0203 22:38:14.249833  993024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 22:38:14.251861  993024 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 22:38:14.251889  993024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 22:38:14.251955  993024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770968
	I0203 22:38:14.267192  993024 addons.go:227] Setting addon default-storageclass=true in "auto-770968"
	W0203 22:38:14.267227  993024 addons.go:236] addon default-storageclass should already be in state true
	I0203 22:38:14.267262  993024 host.go:66] Checking if "auto-770968" exists ...
	I0203 22:38:14.267778  993024 cli_runner.go:164] Run: docker container inspect auto-770968 --format={{.State.Status}}
	I0203 22:38:14.351780  993024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33331 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/auto-770968/id_rsa Username:docker}
	I0203 22:38:14.376996  993024 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 22:38:14.377023  993024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 22:38:14.377082  993024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770968
	I0203 22:38:14.450495  993024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0203 22:38:14.479896  993024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33331 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/auto-770968/id_rsa Username:docker}
	I0203 22:38:14.553159  993024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 22:38:14.655981  993024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 22:38:14.742628  993024 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-770968" context rescaled to 1 replicas
	I0203 22:38:14.742674  993024 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 22:38:14.745012  993024 out.go:177] * Verifying Kubernetes components...
	I0203 22:38:13.393177  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:14.892284  979588 pod_ready.go:92] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.892315  979588 pod_ready.go:81] duration metric: took 12.516003387s waiting for pod "etcd-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.892325  979588 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.896342  979588 pod_ready.go:92] pod "kube-apiserver-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.896361  979588 pod_ready.go:81] duration metric: took 4.029948ms waiting for pod "kube-apiserver-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.896372  979588 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.900459  979588 pod_ready.go:92] pod "kube-controller-manager-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.900476  979588 pod_ready.go:81] duration metric: took 4.097977ms waiting for pod "kube-controller-manager-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.900488  979588 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6q8r8" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.904379  979588 pod_ready.go:92] pod "kube-proxy-6q8r8" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.904395  979588 pod_ready.go:81] duration metric: took 3.900784ms waiting for pod "kube-proxy-6q8r8" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.904404  979588 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.908308  979588 pod_ready.go:92] pod "kube-scheduler-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.908329  979588 pod_ready.go:81] duration metric: took 3.918339ms waiting for pod "kube-scheduler-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.908336  979588 pod_ready.go:38] duration metric: took 12.545672865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:14.908355  979588 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 22:38:14.915923  979588 ops.go:34] apiserver oom_adj: -16
	I0203 22:38:14.915946  979588 kubeadm.go:637] restartCluster took 55.680977837s
	I0203 22:38:14.915955  979588 kubeadm.go:403] StartCluster complete in 55.764379154s
	I0203 22:38:14.915973  979588 settings.go:142] acquiring lock: {Name:mkf92d82d8749aa11cbf8d7cc1c5c387b3a944f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:14.916045  979588 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:38:14.917278  979588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/kubeconfig: {Name:mk7b0a220bbb894990ed89116f6b1e42d435549f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:14.917594  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0203 22:38:14.917805  979588 config.go:180] Loaded profile config "pause-868256": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:14.917754  979588 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0203 22:38:14.917856  979588 addons.go:65] Setting storage-provisioner=true in profile "pause-868256"
	I0203 22:38:14.917860  979588 addons.go:65] Setting default-storageclass=true in profile "pause-868256"
	I0203 22:38:14.917878  979588 addons.go:227] Setting addon storage-provisioner=true in "pause-868256"
	I0203 22:38:14.917884  979588 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-868256"
	W0203 22:38:14.917890  979588 addons.go:236] addon storage-provisioner should already be in state true
	I0203 22:38:14.917954  979588 host.go:66] Checking if "pause-868256" exists ...
	I0203 22:38:14.918184  979588 cli_runner.go:164] Run: docker container inspect pause-868256 --format={{.State.Status}}
	I0203 22:38:14.918353  979588 cli_runner.go:164] Run: docker container inspect pause-868256 --format={{.State.Status}}
	I0203 22:38:14.918447  979588 kapi.go:59] client config for pause-868256: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.crt", KeyFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.key", CAFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1891540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 22:38:14.921602  979588 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-868256" context rescaled to 1 replicas
	I0203 22:38:14.921645  979588 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 22:38:14.924729  979588 out.go:177] * Verifying Kubernetes components...
	I0203 22:38:14.926880  979588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 22:38:15.030594  979588 node_ready.go:35] waiting up to 6m0s for node "pause-868256" to be "Ready" ...
	I0203 22:38:15.030687  979588 start.go:892] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0203 22:38:15.043534  979588 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 22:38:14.747583  993024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 22:38:15.977908  993024 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.527368471s)
	I0203 22:38:15.977942  993024 start.go:919] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0203 22:38:16.053807  993024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.397786019s)
	I0203 22:38:16.053868  993024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.500683626s)
	I0203 22:38:16.056686  993024 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0203 22:38:16.054285  993024 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.306658637s)
	I0203 22:38:15.045509  979588 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 22:38:15.045533  979588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 22:38:15.045599  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:38:15.052408  979588 kapi.go:59] client config for pause-868256: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.crt", KeyFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.key", CAFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1891540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 22:38:15.055523  979588 addons.go:227] Setting addon default-storageclass=true in "pause-868256"
	W0203 22:38:15.055547  979588 addons.go:236] addon default-storageclass should already be in state true
	I0203 22:38:15.055578  979588 host.go:66] Checking if "pause-868256" exists ...
	I0203 22:38:15.056030  979588 cli_runner.go:164] Run: docker container inspect pause-868256 --format={{.State.Status}}
	I0203 22:38:15.095382  979588 node_ready.go:49] node "pause-868256" has status "Ready":"True"
	I0203 22:38:15.095405  979588 node_ready.go:38] duration metric: took 64.776768ms waiting for node "pause-868256" to be "Ready" ...
	I0203 22:38:15.095415  979588 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:15.170238  979588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/pause-868256/id_rsa Username:docker}
	I0203 22:38:15.180586  979588 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 22:38:15.180615  979588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 22:38:15.180676  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:38:15.281164  979588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/pause-868256/id_rsa Username:docker}
	I0203 22:38:15.282096  979588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 22:38:15.294779  979588 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:15.398048  979588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 22:38:15.690599  979588 pod_ready.go:92] pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:15.690623  979588 pod_ready.go:81] duration metric: took 395.806992ms waiting for pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:15.690637  979588 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.091573  979588 pod_ready.go:92] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:16.091597  979588 pod_ready.go:81] duration metric: took 400.951095ms waiting for pod "etcd-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.091610  979588 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.302870  979588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.020734682s)
	I0203 22:38:16.305381  979588 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0203 22:38:16.306921  979588 addons.go:492] enable addons completed in 1.389182055s: enabled=[storage-provisioner default-storageclass]
	I0203 22:38:13.013687 1004591 main.go:141] libmachine: Using SSH client type: native
	I0203 22:38:13.013894 1004591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33336 <nil> <nil>}
	I0203 22:38:13.013923 1004591 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 22:38:13.742305 1004591 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:38:12.937576440 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0203 22:38:13.742351 1004591 machine.go:91] provisioned docker machine in 2.273879376s
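	The diff -u old new || { mv; daemon-reload; restart; } construct above makes the unit swap idempotent: Docker is only replaced and restarted when the rendered unit actually differs from the installed one, and the emitted diff doubles as a change log. The same guard works for any managed file, along the lines of (names illustrative):
	
		diff -u /etc/foo.conf /etc/foo.conf.new || {
		  sudo mv /etc/foo.conf.new /etc/foo.conf
		  sudo systemctl daemon-reload && sudo systemctl restart foo
		}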
	I0203 22:38:13.742362 1004591 client.go:171] LocalClient.Create took 10.298549152s
	I0203 22:38:13.742384 1004591 start.go:167] duration metric: libmachine.API.Create for "kindnet-770968" took 10.29861439s
	I0203 22:38:13.742394 1004591 start.go:300] post-start starting for "kindnet-770968" (driver="docker")
	I0203 22:38:13.742406 1004591 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 22:38:13.742469 1004591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 22:38:13.742528 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:13.814394 1004591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa Username:docker}
	I0203 22:38:13.912829 1004591 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 22:38:13.916310 1004591 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0203 22:38:13.916344 1004591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0203 22:38:13.916358 1004591 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0203 22:38:13.916366 1004591 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0203 22:38:13.916378 1004591 filesync.go:126] Scanning /home/jenkins/minikube-integration/15770-643340/.minikube/addons for local assets ...
	I0203 22:38:13.916447 1004591 filesync.go:126] Scanning /home/jenkins/minikube-integration/15770-643340/.minikube/files for local assets ...
	I0203 22:38:13.916538 1004591 filesync.go:149] local asset: /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/6500652.pem -> 6500652.pem in /etc/ssl/certs
	I0203 22:38:13.916645 1004591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 22:38:13.924601 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/6500652.pem --> /etc/ssl/certs/6500652.pem (1708 bytes)
	I0203 22:38:13.944178 1004591 start.go:303] post-start completed in 201.758507ms
	I0203 22:38:13.944627 1004591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-770968
	I0203 22:38:14.037008 1004591 profile.go:148] Saving config to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/config.json ...
	I0203 22:38:14.037300 1004591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 22:38:14.037360 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:14.130516 1004591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa Username:docker}
	I0203 22:38:14.226028 1004591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0203 22:38:14.230580 1004591 start.go:128] duration metric: createHost completed in 10.790100425s
	I0203 22:38:14.230608 1004591 start.go:83] releasing machines lock for "kindnet-770968", held for 10.790366482s
	I0203 22:38:14.230679 1004591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-770968
	I0203 22:38:14.342079 1004591 ssh_runner.go:195] Run: cat /version.json
	I0203 22:38:14.342157 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:14.342169 1004591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 22:38:14.342263 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:14.451187 1004591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa Username:docker}
	I0203 22:38:14.453252 1004591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa Username:docker}
	I0203 22:38:14.548593 1004591 ssh_runner.go:195] Run: systemctl --version
	I0203 22:38:14.582947 1004591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 22:38:14.588325 1004591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0203 22:38:14.609903 1004591 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
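	The find/sed pipeline above patches the stock loopback CNI config in place: it injects a "name" field if one is missing and pins cniVersion to 1.0.0, which current CNI plugins expect. Reconstructed from the sed expressions (the log does not dump the file), the change is roughly:
	
		before:  { "cniVersion": "<old>", "type": "loopback" }
		after:   { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }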
	I0203 22:38:14.610060 1004591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 22:38:14.617084 1004591 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0203 22:38:14.630485 1004591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 22:38:14.655282 1004591 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0203 22:38:14.655320 1004591 start.go:483] detecting cgroup driver to use...
	I0203 22:38:14.655357 1004591 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 22:38:14.655499 1004591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 22:38:14.674360 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0203 22:38:14.683691 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 22:38:14.693014 1004591 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 22:38:14.693086 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 22:38:14.702223 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 22:38:14.710604 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 22:38:14.719403 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 22:38:14.727956 1004591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 22:38:14.740081 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
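	Taken together, the sed edits above leave /etc/containerd/config.toml using cgroupfs, the v2 runc runtime, the registry.k8s.io pause image, and the standard CNI config directory. Reconstructed from the edits (the log never prints the file), the touched keys end up approximately as:
	
		sandbox_image = "registry.k8s.io/pause:3.9"
		restrict_oom_score_adj = false
		SystemdCgroup = false
		runtime_type = "io.containerd.runc.v2"
		conf_dir = "/etc/cni/net.d"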
	I0203 22:38:14.752368 1004591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 22:38:14.762909 1004591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 22:38:14.772602 1004591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 22:38:14.959370 1004591 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 22:38:15.084053 1004591 start.go:483] detecting cgroup driver to use...
	I0203 22:38:15.084115 1004591 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 22:38:15.084169 1004591 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 22:38:15.101931 1004591 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0203 22:38:15.102002 1004591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 22:38:15.126009 1004591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 22:38:15.152560 1004591 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 22:38:15.285995 1004591 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 22:38:15.401814 1004591 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 22:38:15.401852 1004591 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0203 22:38:15.426705 1004591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 22:38:15.550281 1004591 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 22:38:15.845337 1004591 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 22:38:15.952066 1004591 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 22:38:16.094060 1004591 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 22:38:16.217618 1004591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 22:38:16.340684 1004591 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 22:38:16.353656 1004591 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 22:38:16.353726 1004591 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 22:38:16.357274 1004591 start.go:551] Will wait 60s for crictl version
	I0203 22:38:16.357347 1004591 ssh_runner.go:195] Run: which crictl
	I0203 22:38:16.360768 1004591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 22:38:16.465833 1004591 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
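	crictl reports RuntimeName docker here because /etc/crictl.yaml was rewritten above to point at unix:///var/run/cri-dockerd.sock: Docker does not implement CRI itself, so cri-dockerd fronts it with the v1alpha2 CRI API. From this point any crictl call goes through that socket, for example:
	
		sudo crictl ps    # served by cri-dockerd rather than containerd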
	I0203 22:38:16.465891 1004591 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 22:38:16.498201 1004591 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 22:38:16.059533  993024 addons.go:492] enable addons completed in 1.907506022s: enabled=[default-storageclass storage-provisioner]
	I0203 22:38:16.058234  993024 node_ready.go:35] waiting up to 15m0s for node "auto-770968" to be "Ready" ...
	I0203 22:38:16.062624  993024 node_ready.go:49] node "auto-770968" has status "Ready":"True"
	I0203 22:38:16.062647  993024 node_ready.go:38] duration metric: took 3.077522ms waiting for node "auto-770968" to be "Ready" ...
	I0203 22:38:16.062657  993024 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:16.069734  993024 pod_ready.go:78] waiting up to 15m0s for pod "coredns-787d4945fb-mgggf" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.534971 1004591 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0203 22:38:16.535057 1004591 cli_runner.go:164] Run: docker network inspect kindnet-770968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0203 22:38:16.620516 1004591 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0203 22:38:16.624113 1004591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
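	This one-liner is minikube's usual host-record injection: strip any existing line tagged host.minikube.internal, append the fresh mapping to a temp file, and copy it back over /etc/hosts with sudo (a plain shell redirect would fail, since the redirect is opened by the unprivileged shell, not by sudo). The resulting entry:
	
		192.168.67.1	host.minikube.internal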
	I0203 22:38:16.635603 1004591 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 22:38:16.635699 1004591 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 22:38:16.666395 1004591 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0203 22:38:16.666421 1004591 docker.go:560] Images already preloaded, skipping extraction
	I0203 22:38:16.666595 1004591 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 22:38:16.700040 1004591 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0203 22:38:16.700060 1004591 cache_images.go:84] Images are preloaded, skipping loading
	I0203 22:38:16.700107 1004591 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 22:38:16.803912 1004591 cni.go:84] Creating CNI manager for "kindnet"
	I0203 22:38:16.803946 1004591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0203 22:38:16.803968 1004591 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-770968 NodeName:kindnet-770968 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0203 22:38:16.804188 1004591 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kindnet-770968"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
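	The kubelet section above sets eviction thresholds as literal percent strings ("0%"). Percent signs in generated text like this are a classic Go trap: if the rendered config is ever passed through fmt as a format string rather than as data, each bare % starts a verb and fmt prints %!<c>(MISSING) in its place. A standalone sketch of the failure mode, illustrative only:
	
	package main
	
	import "fmt"
	
	func main() {
		cfg := `nodefs.available: "0%"` // one line of the generated YAML
	
		// Safe: the config is data, the verb is explicit.
		fmt.Printf("%s\n", cfg) // nodefs.available: "0%"
	
		// Unsafe: the config itself is used as the format string. fmt parses
		// %" as a verb with no operand and emits %!"(MISSING) in its place.
		fmt.Printf(cfg + "\n") // nodefs.available: "0%!"(MISSING)
	}
	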
	
	I0203 22:38:16.804301 1004591 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kindnet-770968 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:kindnet-770968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0203 22:38:16.804367 1004591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0203 22:38:16.816024 1004591 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 22:38:16.816088 1004591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 22:38:16.824791 1004591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (446 bytes)
	I0203 22:38:16.842350 1004591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 22:38:16.860002 1004591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes)
	I0203 22:38:16.875820 1004591 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0203 22:38:16.879198 1004591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
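	The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: strip any previous entry from /etc/hosts, append the current IP, and move the result into place via a temp file. The same edit in Go, as a minimal sketch (the path, IP, and rename-instead-of-cp are illustrative):
	
	package main
	
	import (
		"os"
		"strings"
	)
	
	func main() {
		const hostsPath = "/etc/hosts" // illustrative; writing it needs root
		const entry = "192.168.67.2\tcontrol-plane.minikube.internal"
	
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			panic(err)
		}
		// Keep every line except a stale control-plane mapping.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
	
		// Write a temp file first, then swap it in (the shell version copies).
		tmp := hostsPath + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		if err := os.Rename(tmp, hostsPath); err != nil {
			panic(err)
		}
	}
	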
	I0203 22:38:16.890903 1004591 certs.go:56] Setting up /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968 for IP: 192.168.67.2
	I0203 22:38:16.890935 1004591 certs.go:186] acquiring lock for shared ca certs: {Name:mke70fce29a277706b809a1e09202f97eb3de8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:16.891085 1004591 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15770-643340/.minikube/ca.key
	I0203 22:38:16.891122 1004591 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15770-643340/.minikube/proxy-client-ca.key
	I0203 22:38:16.891196 1004591 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.key
	I0203 22:38:16.891216 1004591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt with IP's: []
	I0203 22:38:17.095005 1004591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt ...
	I0203 22:38:17.095035 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: {Name:mk3c77b8eff68bc1ccdd46c28205301dc7974378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:17.095187 1004591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.key ...
	I0203 22:38:17.095198 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.key: {Name:mkfefcf2627b205e82a91c5f0410ce518e0242b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:17.095266 1004591 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.key.c7fa3a9e
	I0203 22:38:17.095280 1004591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0203 22:38:17.270396 1004591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.crt.c7fa3a9e ...
	I0203 22:38:17.270429 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.crt.c7fa3a9e: {Name:mk22d64d141d8cb1755d91d94ffbd1df94ace2be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:17.270575 1004591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.key.c7fa3a9e ...
	I0203 22:38:17.270586 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.key.c7fa3a9e: {Name:mkd0dd7be4874ef70999fdcfaf854f142843fbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:17.270653 1004591 certs.go:333] copying /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.crt
	I0203 22:38:17.270707 1004591 certs.go:337] copying /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.key
	I0203 22:38:17.270750 1004591 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.key
	I0203 22:38:17.270763 1004591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.crt with IP's: []
	I0203 22:38:17.686011 1004591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.crt ...
	I0203 22:38:17.686042 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.crt: {Name:mkc6d2883d85d953c47cd456795fcd75eaaf3eda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:17.686234 1004591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.key ...
	I0203 22:38:17.686245 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.key: {Name:mk54b39abd9e8bdaf7fb30413a204e94320d7218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
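	The crypto.go steps above mint leaf certificates signed by the shared minikubeCA, with the logged IPs embedded as SANs. A compressed, self-contained sketch of that pattern with crypto/x509 (the CA is generated in-memory here purely for illustration; minikube reuses its persisted ca.key):
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// CA key pair (stand-in for the persisted minikubeCA).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Leaf: an apiserver-style cert carrying the IP SANs seen in the log.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("192.168.67.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}
	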
	I0203 22:38:17.686401 1004591 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/650065.pem (1338 bytes)
	W0203 22:38:17.686438 1004591 certs.go:397] ignoring /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/650065_empty.pem, impossibly tiny 0 bytes
	I0203 22:38:17.686449 1004591 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 22:38:17.686472 1004591 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem (1082 bytes)
	I0203 22:38:17.686494 1004591 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/cert.pem (1123 bytes)
	I0203 22:38:17.686518 1004591 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/key.pem (1679 bytes)
	I0203 22:38:17.686550 1004591 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/6500652.pem (1708 bytes)
	I0203 22:38:17.687197 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0203 22:38:17.707800 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 22:38:17.726274 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 22:38:17.745415 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 22:38:17.764416 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 22:38:17.783659 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 22:38:17.803874 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 22:38:17.823078 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 22:38:17.841596 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/certs/650065.pem --> /usr/share/ca-certificates/650065.pem (1338 bytes)
	I0203 22:38:17.864354 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/6500652.pem --> /usr/share/ca-certificates/6500652.pem (1708 bytes)
	I0203 22:38:17.887733 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 22:38:17.909826 1004591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 22:38:17.924689 1004591 ssh_runner.go:195] Run: openssl version
	I0203 22:38:17.929986 1004591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6500652.pem && ln -fs /usr/share/ca-certificates/6500652.pem /etc/ssl/certs/6500652.pem"
	I0203 22:38:17.938104 1004591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6500652.pem
	I0203 22:38:17.941617 1004591 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb  3 22:12 /usr/share/ca-certificates/6500652.pem
	I0203 22:38:17.941673 1004591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6500652.pem
	I0203 22:38:17.946718 1004591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6500652.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 22:38:17.954962 1004591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 22:38:17.962787 1004591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 22:38:17.970334 1004591 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb  3 22:08 /usr/share/ca-certificates/minikubeCA.pem
	I0203 22:38:17.970391 1004591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 22:38:17.975236 1004591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 22:38:17.982850 1004591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/650065.pem && ln -fs /usr/share/ca-certificates/650065.pem /etc/ssl/certs/650065.pem"
	I0203 22:38:17.991072 1004591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/650065.pem
	I0203 22:38:17.994888 1004591 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb  3 22:12 /usr/share/ca-certificates/650065.pem
	I0203 22:38:17.994957 1004591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/650065.pem
	I0203 22:38:18.000693 1004591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/650065.pem /etc/ssl/certs/51391683.0"
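	The loop above installs each CA in OpenSSL's CApath layout: compute the subject hash with openssl x509 -hash, then symlink <hash>.0 to the certificate so verification can find it. The same two steps driven from Go (paths illustrative; assumes openssl on PATH):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		cert := "/usr/share/ca-certificates/650065.pem" // illustrative path
	
		// Step 1: ask openssl for the subject-name hash (e.g. "51391683").
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
	
		// Step 2: symlink <hash>.0 into the CApath directory, replacing any
		// stale link (the log's `ln -fs` equivalent).
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}
	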
	I0203 22:38:18.009336 1004591 kubeadm.go:401] StartCluster: {Name:kindnet-770968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kindnet-770968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 22:38:18.009495 1004591 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 22:38:18.033692 1004591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 22:38:18.042084 1004591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 22:38:18.050389 1004591 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0203 22:38:18.050464 1004591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 22:38:18.059003 1004591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 22:38:18.059054 1004591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 22:38:18.115346 1004591 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0203 22:38:18.115501 1004591 kubeadm.go:322] [preflight] Running pre-flight checks
	I0203 22:38:18.150202 1004591 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0203 22:38:18.150316 1004591 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1027-gcp
	I0203 22:38:18.150386 1004591 kubeadm.go:322] OS: Linux
	I0203 22:38:18.150470 1004591 kubeadm.go:322] CGROUPS_CPU: enabled
	I0203 22:38:18.150553 1004591 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0203 22:38:18.150635 1004591 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0203 22:38:18.150721 1004591 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0203 22:38:18.150791 1004591 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0203 22:38:18.150854 1004591 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0203 22:38:18.150916 1004591 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0203 22:38:18.150987 1004591 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0203 22:38:18.151061 1004591 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0203 22:38:18.220873 1004591 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 22:38:18.221009 1004591 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 22:38:18.221120 1004591 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0203 22:38:18.367754 1004591 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 22:38:16.490944  979588 pod_ready.go:92] pod "kube-apiserver-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:16.490967  979588 pod_ready.go:81] duration metric: took 399.350207ms waiting for pod "kube-apiserver-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.490977  979588 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.895568  979588 pod_ready.go:92] pod "kube-controller-manager-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:16.895592  979588 pod_ready.go:81] duration metric: took 404.606919ms waiting for pod "kube-controller-manager-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.895606  979588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6q8r8" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:17.291424  979588 pod_ready.go:92] pod "kube-proxy-6q8r8" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:17.291453  979588 pod_ready.go:81] duration metric: took 395.838528ms waiting for pod "kube-proxy-6q8r8" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:17.291467  979588 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:17.690304  979588 pod_ready.go:92] pod "kube-scheduler-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:17.690326  979588 pod_ready.go:81] duration metric: took 398.850097ms waiting for pod "kube-scheduler-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:17.690333  979588 pod_ready.go:38] duration metric: took 2.59490922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
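	Every pod_ready.go wait above reduces to the same primitive: fetch the pod and test its Ready condition. A minimal client-go sketch of that check (kubeconfig path and pod name are placeholders):
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-pause-868256", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// A pod is "Ready" when its PodReady condition is True.
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Println("Ready:", ready)
	}
	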
	I0203 22:38:17.690353  979588 api_server.go:51] waiting for apiserver process to appear ...
	I0203 22:38:17.690389  979588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 22:38:17.700729  979588 api_server.go:71] duration metric: took 2.779046006s to wait for apiserver process to appear ...
	I0203 22:38:17.700770  979588 api_server.go:87] waiting for apiserver healthz status ...
	I0203 22:38:17.700785  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:38:17.705049  979588 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
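	The healthz gate is simply a poll of the secure endpoint until it answers 200 with body "ok". A bare-bones version of that loop (endpoint and timeout are illustrative; certificate verification is skipped only because this is a sketch):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // retry until healthy or deadline
		}
		fmt.Println("apiserver never became healthy")
	}
	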
	I0203 22:38:17.706096  979588 api_server.go:140] control plane version: v1.26.1
	I0203 22:38:17.706119  979588 api_server.go:130] duration metric: took 5.342484ms to wait for apiserver health ...
	I0203 22:38:17.706130  979588 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 22:38:17.893912  979588 system_pods.go:59] 7 kube-system pods found
	I0203 22:38:17.893946  979588 system_pods.go:61] "coredns-787d4945fb-dd5vv" [aed13814-af10-4c1c-9548-20630079cd3c] Running
	I0203 22:38:17.893953  979588 system_pods.go:61] "etcd-pause-868256" [595c2af1-3166-4d59-969d-bc282f646ed5] Running
	I0203 22:38:17.893959  979588 system_pods.go:61] "kube-apiserver-pause-868256" [2a4d2f7d-025b-47a7-99c5-70079183e798] Running
	I0203 22:38:17.893966  979588 system_pods.go:61] "kube-controller-manager-pause-868256" [8f8a526f-dea5-4e08-8258-dd4e4654ae32] Running
	I0203 22:38:17.893972  979588 system_pods.go:61] "kube-proxy-6q8r8" [a9c6e5f1-fd98-4bc1-aae7-b0485f877616] Running
	I0203 22:38:17.893978  979588 system_pods.go:61] "kube-scheduler-pause-868256" [72bf6a79-cdaf-46bb-93fc-f8d402880694] Running
	I0203 22:38:17.893984  979588 system_pods.go:61] "storage-provisioner" [48da2fca-7198-449d-bebd-84e7ce3d61e0] Running
	I0203 22:38:17.893991  979588 system_pods.go:74] duration metric: took 187.854082ms to wait for pod list to return data ...
	I0203 22:38:17.894002  979588 default_sa.go:34] waiting for default service account to be created ...
	I0203 22:38:18.090160  979588 default_sa.go:45] found service account: "default"
	I0203 22:38:18.090187  979588 default_sa.go:55] duration metric: took 196.177872ms for default service account to be created ...
	I0203 22:38:18.090198  979588 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 22:38:18.293177  979588 system_pods.go:86] 7 kube-system pods found
	I0203 22:38:18.293208  979588 system_pods.go:89] "coredns-787d4945fb-dd5vv" [aed13814-af10-4c1c-9548-20630079cd3c] Running
	I0203 22:38:18.293216  979588 system_pods.go:89] "etcd-pause-868256" [595c2af1-3166-4d59-969d-bc282f646ed5] Running
	I0203 22:38:18.293224  979588 system_pods.go:89] "kube-apiserver-pause-868256" [2a4d2f7d-025b-47a7-99c5-70079183e798] Running
	I0203 22:38:18.293232  979588 system_pods.go:89] "kube-controller-manager-pause-868256" [8f8a526f-dea5-4e08-8258-dd4e4654ae32] Running
	I0203 22:38:18.293238  979588 system_pods.go:89] "kube-proxy-6q8r8" [a9c6e5f1-fd98-4bc1-aae7-b0485f877616] Running
	I0203 22:38:18.293244  979588 system_pods.go:89] "kube-scheduler-pause-868256" [72bf6a79-cdaf-46bb-93fc-f8d402880694] Running
	I0203 22:38:18.293251  979588 system_pods.go:89] "storage-provisioner" [48da2fca-7198-449d-bebd-84e7ce3d61e0] Running
	I0203 22:38:18.293262  979588 system_pods.go:126] duration metric: took 203.057207ms to wait for k8s-apps to be running ...
	I0203 22:38:18.293276  979588 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 22:38:18.293331  979588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 22:38:18.305047  979588 system_svc.go:56] duration metric: took 11.756989ms WaitForService to wait for kubelet.
	I0203 22:38:18.305082  979588 kubeadm.go:578] duration metric: took 3.383408222s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0203 22:38:18.305108  979588 node_conditions.go:102] verifying NodePressure condition ...
	I0203 22:38:18.491041  979588 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0203 22:38:18.491067  979588 node_conditions.go:123] node cpu capacity is 8
	I0203 22:38:18.491078  979588 node_conditions.go:105] duration metric: took 185.954067ms to run NodePressure ...
	I0203 22:38:18.491092  979588 start.go:228] waiting for startup goroutines ...
	I0203 22:38:18.491102  979588 start.go:233] waiting for cluster config update ...
	I0203 22:38:18.491115  979588 start.go:240] writing updated cluster config ...
	I0203 22:38:18.491444  979588 ssh_runner.go:195] Run: rm -f paused
	I0203 22:38:18.544355  979588 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0203 22:38:18.546936  979588 out.go:177] * Done! kubectl is now configured to use "pause-868256" cluster and "default" namespace by default
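	The closing check compares kubectl's minor version with the control plane's and reports the skew (kubectl supports one minor version of drift either way). A toy sketch of that comparison, pure string handling and illustrative only:
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// minor extracts the minor number from a "major.minor.patch" version.
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	
	func main() {
		kubectl, cluster := "1.26.1", "1.26.1"
		skew := minor(kubectl) - minor(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	}
	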
	I0203 22:38:18.079891  993024 pod_ready.go:92] pod "coredns-787d4945fb-mgggf" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:18.079917  993024 pod_ready.go:81] duration metric: took 2.010139572s waiting for pod "coredns-787d4945fb-mgggf" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.079929  993024 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.085179  993024 pod_ready.go:92] pod "etcd-auto-770968" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:18.085203  993024 pod_ready.go:81] duration metric: took 5.265511ms waiting for pod "etcd-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.085221  993024 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.090508  993024 pod_ready.go:92] pod "kube-apiserver-auto-770968" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:18.090537  993024 pod_ready.go:81] duration metric: took 5.307943ms waiting for pod "kube-apiserver-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.090549  993024 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.095978  993024 pod_ready.go:92] pod "kube-controller-manager-auto-770968" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:18.096004  993024 pod_ready.go:81] duration metric: took 5.445961ms waiting for pod "kube-controller-manager-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.096017  993024 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-9xzdr" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.101129  993024 pod_ready.go:92] pod "kube-proxy-9xzdr" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:18.101150  993024 pod_ready.go:81] duration metric: took 5.12452ms waiting for pod "kube-proxy-9xzdr" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.101168  993024 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.477216  993024 pod_ready.go:92] pod "kube-scheduler-auto-770968" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:18.477242  993024 pod_ready.go:81] duration metric: took 376.062648ms waiting for pod "kube-scheduler-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.477255  993024 pod_ready.go:38] duration metric: took 2.4145873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:18.477278  993024 api_server.go:51] waiting for apiserver process to appear ...
	I0203 22:38:18.477325  993024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 22:38:18.487665  993024 api_server.go:71] duration metric: took 3.744954873s to wait for apiserver process to appear ...
	I0203 22:38:18.487694  993024 api_server.go:87] waiting for apiserver healthz status ...
	I0203 22:38:18.487710  993024 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0203 22:38:18.492370  993024 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0203 22:38:18.493311  993024 api_server.go:140] control plane version: v1.26.1
	I0203 22:38:18.493332  993024 api_server.go:130] duration metric: took 5.632737ms to wait for apiserver health ...
	I0203 22:38:18.493342  993024 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 22:38:18.681678  993024 system_pods.go:59] 7 kube-system pods found
	I0203 22:38:18.681713  993024 system_pods.go:61] "coredns-787d4945fb-mgggf" [5ed178b8-b7da-4220-8918-3ae813bad2dc] Running
	I0203 22:38:18.681720  993024 system_pods.go:61] "etcd-auto-770968" [7c38d51e-b3d3-44f2-a341-cfa8ea7214a4] Running
	I0203 22:38:18.681727  993024 system_pods.go:61] "kube-apiserver-auto-770968" [ef37a8d2-175c-4870-af06-cca9da2213d3] Running
	I0203 22:38:18.681735  993024 system_pods.go:61] "kube-controller-manager-auto-770968" [edf88d0e-aab2-4513-b404-12cbb7f6b3fe] Running
	I0203 22:38:18.681742  993024 system_pods.go:61] "kube-proxy-9xzdr" [0422a10c-b5b6-40f8-9414-890e5edb3789] Running
	I0203 22:38:18.681749  993024 system_pods.go:61] "kube-scheduler-auto-770968" [57dcc5c1-699f-42bb-a2b1-913e211a730b] Running
	I0203 22:38:18.681753  993024 system_pods.go:61] "storage-provisioner" [db03fdab-2123-43e2-a481-7f60e9b8abd9] Running
	I0203 22:38:18.681758  993024 system_pods.go:74] duration metric: took 188.41201ms to wait for pod list to return data ...
	I0203 22:38:18.681772  993024 default_sa.go:34] waiting for default service account to be created ...
	I0203 22:38:18.879833  993024 default_sa.go:45] found service account: "default"
	I0203 22:38:18.879867  993024 default_sa.go:55] duration metric: took 198.086813ms for default service account to be created ...
	I0203 22:38:18.879878  993024 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 22:38:19.079562  993024 system_pods.go:86] 7 kube-system pods found
	I0203 22:38:19.079597  993024 system_pods.go:89] "coredns-787d4945fb-mgggf" [5ed178b8-b7da-4220-8918-3ae813bad2dc] Running
	I0203 22:38:19.079605  993024 system_pods.go:89] "etcd-auto-770968" [7c38d51e-b3d3-44f2-a341-cfa8ea7214a4] Running
	I0203 22:38:19.079611  993024 system_pods.go:89] "kube-apiserver-auto-770968" [ef37a8d2-175c-4870-af06-cca9da2213d3] Running
	I0203 22:38:19.079620  993024 system_pods.go:89] "kube-controller-manager-auto-770968" [edf88d0e-aab2-4513-b404-12cbb7f6b3fe] Running
	I0203 22:38:19.079628  993024 system_pods.go:89] "kube-proxy-9xzdr" [0422a10c-b5b6-40f8-9414-890e5edb3789] Running
	I0203 22:38:19.079634  993024 system_pods.go:89] "kube-scheduler-auto-770968" [57dcc5c1-699f-42bb-a2b1-913e211a730b] Running
	I0203 22:38:19.079640  993024 system_pods.go:89] "storage-provisioner" [db03fdab-2123-43e2-a481-7f60e9b8abd9] Running
	I0203 22:38:19.079648  993024 system_pods.go:126] duration metric: took 199.764172ms to wait for k8s-apps to be running ...
	I0203 22:38:19.079659  993024 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 22:38:19.079703  993024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 22:38:19.090019  993024 system_svc.go:56] duration metric: took 10.348879ms WaitForService to wait for kubelet.
	I0203 22:38:19.090048  993024 kubeadm.go:578] duration metric: took 4.347346913s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0203 22:38:19.090074  993024 node_conditions.go:102] verifying NodePressure condition ...
	I0203 22:38:19.277411  993024 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0203 22:38:19.277444  993024 node_conditions.go:123] node cpu capacity is 8
	I0203 22:38:19.277460  993024 node_conditions.go:105] duration metric: took 187.381288ms to run NodePressure ...
	I0203 22:38:19.277474  993024 start.go:228] waiting for startup goroutines ...
	I0203 22:38:19.277482  993024 start.go:233] waiting for cluster config update ...
	I0203 22:38:19.277495  993024 start.go:240] writing updated cluster config ...
	I0203 22:38:19.277799  993024 ssh_runner.go:195] Run: rm -f paused
	I0203 22:38:19.343358  993024 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0203 22:38:19.346801  993024 out.go:177] * Done! kubectl is now configured to use "auto-770968" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-03 22:36:24 UTC, end at Fri 2023-02-03 22:38:20 UTC. --
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.633049977Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.633076265Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.633081922Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.633249673Z" level=info msg="Loading containers: start."
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.790653955Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.847019416Z" level=info msg="Loading containers: done."
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.880068408Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.880170354Z" level=info msg="Daemon has completed initialization"
	Feb 03 22:37:16 pause-868256 systemd[1]: Started Docker Application Container Engine.
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.900326467Z" level=info msg="API listen on [::]:2376"
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.906076177Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 03 22:37:18 pause-868256 dockerd[4877]: time="2023-02-03T22:37:18.371758193Z" level=info msg="ignoring event" container=d0ec4fe6e67fa4f395d4d150b214490e9f1fbf20d8203653d6005c635ddc8628 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:39 pause-868256 dockerd[4877]: time="2023-02-03T22:37:39.545368649Z" level=info msg="ignoring event" container=b2e7a9f54a0419231060db56c69b54be5167fcfaddd3d1c6fa9c1b05363364fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.836014856Z" level=info msg="ignoring event" container=ec9da7ff44bd974a6c7738a0784f426043dcd53abacf1eb7797361c3d84a0b5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.837654540Z" level=info msg="ignoring event" container=22bb57467f2447dbae6d332677cf48e7d192f8ea7484eb6998ae4116c62183ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.841188556Z" level=info msg="ignoring event" container=65f599eb0eebff7f0068738e41c8ce5ac1384d19182283b24f7b5d74df1778a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.847483927Z" level=info msg="ignoring event" container=36c6f2ce6f7acad72c43e1117d4df8a4d65a22f3a8dc6d6b96a5728233e06ca4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.850472509Z" level=info msg="ignoring event" container=c638c5348fb7a44ef74083b0194d8a784bf238b23ec719ac4da429cb2233299b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.850515884Z" level=info msg="ignoring event" container=0364f8ab712b82038f0d44ed3b9a487c0a41355a9bf2c3871bc59cbe494bcd13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.963104917Z" level=info msg="ignoring event" container=dedaef110fcee9bafa404feb548f2906376d2c78796b863fcaa3eeb9dfae6f7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.963155127Z" level=info msg="ignoring event" container=e5363a998cd8779f1dfb21bfc557b173cfb95790a25259b71f05d6751e64d1e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.964922705Z" level=info msg="ignoring event" container=1a8a12bf42f57608542b94122ec09cf259f8a2147761e771499f9b85f78f6958 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.969399825Z" level=info msg="ignoring event" container=8f69d29f79237d9d49213cba679accdd983fcb185dbfc1c5307c4dc3bc005d57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:50 pause-868256 dockerd[4877]: time="2023-02-03T22:37:50.572502115Z" level=error msg="4a01e9080876caf1021d4aa3b4ba2a876f8bd761cd3605b526e58885ac293bcb cleanup: failed to delete container from containerd: no such container"
	Feb 03 22:37:54 pause-868256 dockerd[4877]: time="2023-02-03T22:37:54.773428186Z" level=info msg="ignoring event" container=900b5dd1be8ed33d38d7f7b4d0c08d1876dbee0c336ae09b5c912646dda06e91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	a511701d78f48       6e38f40d628db       4 seconds ago        Running             storage-provisioner       0                   797db591ad316
	47ced30f2dd57       5185b96f0becf       18 seconds ago       Running             coredns                   2                   d9f07975099b4
	eed4fb65b1355       46a6bb3c77ce0       19 seconds ago       Running             kube-proxy                3                   864eee88f4f2d
	6a83f36d42569       655493523f607       24 seconds ago       Running             kube-scheduler            3                   d4b8c3d690241
	318fc22205625       e9c08e11b07f6       24 seconds ago       Running             kube-controller-manager   3                   0ab7110849dd3
	35b3d2d96970f       deb04688c4a35       24 seconds ago       Running             kube-apiserver            3                   02be3ab2f0196
	a18ea735ec1fa       fce326961ae2d       24 seconds ago       Running             etcd                      3                   4566f9c32f9b6
	4a01e9080876c       deb04688c4a35       30 seconds ago       Created             kube-apiserver            2                   22bb57467f244
	dedaef110fcee       fce326961ae2d       42 seconds ago       Exited              etcd                      2                   36c6f2ce6f7ac
	65f599eb0eebf       e9c08e11b07f6       44 seconds ago       Exited              kube-controller-manager   2                   c638c5348fb7a
	1a8a12bf42f57       655493523f607       44 seconds ago       Exited              kube-scheduler            2                   ec9da7ff44bd9
	0364f8ab712b8       46a6bb3c77ce0       49 seconds ago       Exited              kube-proxy                2                   8f69d29f79237
	900b5dd1be8ed       5185b96f0becf       About a minute ago   Exited              coredns                   1                   e5363a998cd87
	
	* 
	* ==> coredns [47ced30f2dd5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:51220 - 31635 "HINFO IN 8398048868105058340.2766040748343229918. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01080628s
	
	* 
	* ==> coredns [900b5dd1be8e] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:35355 - 33426 "HINFO IN 5964515539779486170.4574324880819497146. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.085467132s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-868256
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-868256
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b839c677c13f941c936975b72b386dd12a345761
	                    minikube.k8s.io/name=pause-868256
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_03T22_36_49_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Feb 2023 22:36:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-868256
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Feb 2023 22:38:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Feb 2023 22:38:00 +0000   Fri, 03 Feb 2023 22:36:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Feb 2023 22:38:00 +0000   Fri, 03 Feb 2023 22:36:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Feb 2023 22:38:00 +0000   Fri, 03 Feb 2023 22:36:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Feb 2023 22:38:00 +0000   Fri, 03 Feb 2023 22:36:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-868256
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4c0b538bb934883b9b745615631a0cd
	  System UUID:                d96f8b80-73b2-4930-815b-fb582dc6c346
	  Boot ID:                    df076b79-1073-4433-b2e0-bb3b5cc417dd
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-dd5vv                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     78s
	  kube-system                 etcd-pause-868256                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         90s
	  kube-system                 kube-apiserver-pause-868256             250m (3%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-pause-868256    200m (2%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-6q8r8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-868256             100m (1%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 77s                  kube-proxy       
	  Normal  Starting                 18s                  kube-proxy       
	  Normal  NodeHasSufficientPID     103s (x5 over 103s)  kubelet          Node pause-868256 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    103s (x5 over 103s)  kubelet          Node pause-868256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  103s (x5 over 103s)  kubelet          Node pause-868256 status is now: NodeHasSufficientMemory
	  Normal  Starting                 91s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  91s                  kubelet          Node pause-868256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s                  kubelet          Node pause-868256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s                  kubelet          Node pause-868256 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             91s                  kubelet          Node pause-868256 status is now: NodeNotReady
	  Normal  NodeReady                90s                  kubelet          Node pause-868256 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  90s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           78s                  node-controller  Node pause-868256 event: Registered Node pause-868256 in Controller
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node pause-868256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node pause-868256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node pause-868256 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                   node-controller  Node pause-868256 event: Registered Node pause-868256 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e 28 34 f1 31 b2 08 06
	[Feb 3 22:30] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e fa bc 13 11 9f 08 06
	[Feb 3 22:31] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 a4 4b 6c 89 08 06
	[Feb 3 22:33] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 19 20 d9 94 9b 08 06
	[  +0.321290] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 19 20 d9 94 9b 08 06
	[Feb 3 22:35] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 f1 6b d2 1d a5 08 06
	[  +0.597047] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a d1 3e c4 71 e1 08 06
	[Feb 3 22:36] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a 80 51 bb 28 22 08 06
	[Feb 3 22:37] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 70 85 71 50 cd 08 06
	[  +0.447318] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 70 85 71 50 cd 08 06
	[ +22.892103] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 17 95 87 4a 17 08 06
	[Feb 3 22:38] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 39 60 ce 8f 52 08 06
	[ +14.790896] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 02 fc e0 1f 1f 1f 08 06
	
	* 
	* ==> etcd [a18ea735ec1f] <==
	* {"level":"info","ts":"2023-02-03T22:37:57.072Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-03T22:37:57.072Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-03T22:37:57.073Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-03T22:37:57.073Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-03T22:37:57.079Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-03T22:37:57.079Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-03T22:37:57.079Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-03T22:37:57.079Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-02-03T22:37:57.079Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 5"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 5"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 5"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 5"}
	{"level":"info","ts":"2023-02-03T22:37:58.962Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-03T22:37:58.962Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-868256 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-03T22:37:58.962Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-03T22:37:58.962Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-03T22:37:58.962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-03T22:37:58.963Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-03T22:37:58.964Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"warn","ts":"2023-02-03T22:38:09.008Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.61668ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-868256\" ","response":"range_response_count:1 size:5460"}
	{"level":"info","ts":"2023-02-03T22:38:09.008Z","caller":"traceutil/trace.go:171","msg":"trace[1730616229] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-868256; range_end:; response_count:1; response_revision:434; }","duration":"120.785836ms","start":"2023-02-03T22:38:08.887Z","end":"2023-02-03T22:38:09.008Z","steps":["trace[1730616229] 'range keys from in-memory index tree'  (duration: 120.416427ms)"],"step_count":1}
	
	* 
	* ==> etcd [dedaef110fce] <==
	* {"level":"info","ts":"2023-02-03T22:37:39.760Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-03T22:37:39.760Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-02-03T22:37:39.760Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-02-03T22:37:39.760Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-03T22:37:39.760Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:40.950Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-868256 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-03T22:37:40.950Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-03T22:37:40.951Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-03T22:37:40.951Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-03T22:37:40.952Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-03T22:37:40.952Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-03T22:37:40.953Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2023-02-03T22:37:49.753Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-03T22:37:49.753Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-868256","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"info","ts":"2023-02-03T22:37:49.760Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2023-02-03T22:37:49.763Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-02-03T22:37:49.763Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-02-03T22:37:49.763Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-868256","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	* 
	* ==> kernel <==
	*  22:38:20 up  2:20,  0 users,  load average: 6.87, 5.39, 3.30
	Linux pause-868256 5.15.0-1027-gcp #34~20.04.1-Ubuntu SMP Mon Jan 9 18:40:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [35b3d2d96970] <==
	* I0203 22:38:00.597506       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0203 22:38:00.597692       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0203 22:38:00.597702       1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
	I0203 22:38:00.598032       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 22:38:00.598191       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0203 22:38:00.636745       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0203 22:38:00.649685       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0203 22:38:00.733438       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0203 22:38:00.735823       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0203 22:38:00.736596       1 cache.go:39] Caches are synced for autoregister controller
	I0203 22:38:00.736719       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0203 22:38:00.737558       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0203 22:38:00.737642       1 shared_informer.go:280] Caches are synced for configmaps
	I0203 22:38:00.737773       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0203 22:38:00.833416       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0203 22:38:00.833451       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0203 22:38:01.349910       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0203 22:38:01.605435       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0203 22:38:02.254494       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 22:38:02.267762       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 22:38:02.303272       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 22:38:02.339559       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 22:38:02.347364       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0203 22:38:13.407639       1 controller.go:615] quota admission added evaluator for: endpoints
	I0203 22:38:13.417865       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [4a01e9080876] <==
	* 
	* 
	* ==> kube-controller-manager [318fc2220562] <==
	* I0203 22:38:13.384395       1 shared_informer.go:280] Caches are synced for TTL after finished
	I0203 22:38:13.384478       1 shared_informer.go:280] Caches are synced for job
	I0203 22:38:13.384527       1 shared_informer.go:280] Caches are synced for ReplicationController
	I0203 22:38:13.384646       1 shared_informer.go:280] Caches are synced for GC
	I0203 22:38:13.385797       1 shared_informer.go:280] Caches are synced for ReplicaSet
	I0203 22:38:13.392056       1 shared_informer.go:280] Caches are synced for HPA
	I0203 22:38:13.395272       1 shared_informer.go:280] Caches are synced for node
	I0203 22:38:13.395283       1 shared_informer.go:280] Caches are synced for disruption
	I0203 22:38:13.395357       1 range_allocator.go:167] Sending events to api server.
	I0203 22:38:13.395394       1 range_allocator.go:171] Starting range CIDR allocator
	I0203 22:38:13.395399       1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
	I0203 22:38:13.395409       1 shared_informer.go:280] Caches are synced for cidrallocator
	I0203 22:38:13.397523       1 shared_informer.go:280] Caches are synced for bootstrap_signer
	I0203 22:38:13.398975       1 shared_informer.go:280] Caches are synced for endpoint
	I0203 22:38:13.401280       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0203 22:38:13.403558       1 shared_informer.go:280] Caches are synced for TTL
	I0203 22:38:13.405731       1 shared_informer.go:280] Caches are synced for daemon sets
	I0203 22:38:13.409201       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0203 22:38:13.481937       1 shared_informer.go:280] Caches are synced for cronjob
	I0203 22:38:13.514605       1 shared_informer.go:280] Caches are synced for resource quota
	I0203 22:38:13.544905       1 shared_informer.go:280] Caches are synced for resource quota
	I0203 22:38:13.598433       1 shared_informer.go:280] Caches are synced for attach detach
	I0203 22:38:13.936046       1 shared_informer.go:280] Caches are synced for garbage collector
	I0203 22:38:13.966331       1 shared_informer.go:280] Caches are synced for garbage collector
	I0203 22:38:13.966366       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [65f599eb0eeb] <==
	* I0203 22:37:36.832518       1 serving.go:348] Generated self-signed cert in-memory
	I0203 22:37:37.438220       1 controllermanager.go:182] Version: v1.26.1
	I0203 22:37:37.438270       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 22:37:37.439973       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0203 22:37:37.440713       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0203 22:37:37.440840       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 22:37:37.441000       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-proxy [0364f8ab712b] <==
	* E0203 22:37:40.531428       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-868256": dial tcp 192.168.85.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.85.2:42262->192.168.85.2:8443: read: connection reset by peer
	E0203 22:37:41.597562       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-868256": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:43.808745       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-868256": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.159722       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-868256": dial tcp 192.168.85.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [eed4fb65b135] <==
	* I0203 22:38:01.749043       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0203 22:38:01.749147       1 server_others.go:109] "Detected node IP" address="192.168.85.2"
	I0203 22:38:01.749173       1 server_others.go:535] "Using iptables proxy"
	I0203 22:38:01.781102       1 server_others.go:176] "Using iptables Proxier"
	I0203 22:38:01.781161       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0203 22:38:01.781174       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0203 22:38:01.781197       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0203 22:38:01.781229       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0203 22:38:01.781627       1 server.go:655] "Version info" version="v1.26.1"
	I0203 22:38:01.781641       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 22:38:01.786662       1 config.go:317] "Starting service config controller"
	I0203 22:38:01.786698       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0203 22:38:01.786804       1 config.go:226] "Starting endpoint slice config controller"
	I0203 22:38:01.786821       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0203 22:38:01.787490       1 config.go:444] "Starting node config controller"
	I0203 22:38:01.787522       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0203 22:38:01.887788       1 shared_informer.go:280] Caches are synced for node config
	I0203 22:38:01.887827       1 shared_informer.go:280] Caches are synced for service config
	I0203 22:38:01.887861       1 shared_informer.go:280] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [1a8a12bf42f5] <==
	* W0203 22:37:48.418053       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.85.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.418102       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.85.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:48.536848       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.85.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.536886       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.85.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:48.559639       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.85.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.559684       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.85.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:48.571229       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.85.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.571271       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.85.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:48.651308       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.85.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.651358       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.85.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:48.759133       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.85.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.759175       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.85.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:49.570456       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.85.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:49.570507       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.85.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:49.596093       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.85.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:49.596144       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.85.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:49.733563       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.85.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:49.733606       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.85.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:49.746697       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.85.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:49.746772       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.85.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:49.753765       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0203 22:37:49.753860       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0203 22:37:49.753911       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 22:37:49.754317       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0203 22:37:49.754357       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [6a83f36d4256] <==
	* I0203 22:37:57.539231       1 serving.go:348] Generated self-signed cert in-memory
	W0203 22:38:00.642481       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0203 22:38:00.642517       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0203 22:38:00.642529       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0203 22:38:00.642539       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 22:38:00.735367       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0203 22:38:00.735410       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 22:38:00.736959       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0203 22:38:00.737207       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 22:38:00.737296       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 22:38:00.737365       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0203 22:38:00.838117       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-03 22:36:24 UTC, end at Fri 2023-02-03 22:38:20 UTC. --
	Feb 03 22:37:56 pause-868256 kubelet[7090]: W0203 22:37:56.835486    7090 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 03 22:37:56 pause-868256 kubelet[7090]: E0203 22:37:56.835582    7090 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 03 22:37:57 pause-868256 kubelet[7090]: I0203 22:37:57.488927    7090 kubelet_node_status.go:70] "Attempting to register node" node="pause-868256"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.756726    7090 kubelet_node_status.go:108] "Node was previously registered" node="pause-868256"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.756844    7090 kubelet_node_status.go:73] "Successfully registered node" node="pause-868256"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.757970    7090 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.833730    7090 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.882541    7090 apiserver.go:52] "Watching apiserver"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.885281    7090 topology_manager.go:210] "Topology Admit Handler"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.885740    7090 topology_manager.go:210] "Topology Admit Handler"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.893472    7090 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934056    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aed13814-af10-4c1c-9548-20630079cd3c-config-volume\") pod \"coredns-787d4945fb-dd5vv\" (UID: \"aed13814-af10-4c1c-9548-20630079cd3c\") " pod="kube-system/coredns-787d4945fb-dd5vv"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934120    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a9c6e5f1-fd98-4bc1-aae7-b0485f877616-kube-proxy\") pod \"kube-proxy-6q8r8\" (UID: \"a9c6e5f1-fd98-4bc1-aae7-b0485f877616\") " pod="kube-system/kube-proxy-6q8r8"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934170    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd4tx\" (UniqueName: \"kubernetes.io/projected/a9c6e5f1-fd98-4bc1-aae7-b0485f877616-kube-api-access-xd4tx\") pod \"kube-proxy-6q8r8\" (UID: \"a9c6e5f1-fd98-4bc1-aae7-b0485f877616\") " pod="kube-system/kube-proxy-6q8r8"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934250    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txkvf\" (UniqueName: \"kubernetes.io/projected/aed13814-af10-4c1c-9548-20630079cd3c-kube-api-access-txkvf\") pod \"coredns-787d4945fb-dd5vv\" (UID: \"aed13814-af10-4c1c-9548-20630079cd3c\") " pod="kube-system/coredns-787d4945fb-dd5vv"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934286    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9c6e5f1-fd98-4bc1-aae7-b0485f877616-lib-modules\") pod \"kube-proxy-6q8r8\" (UID: \"a9c6e5f1-fd98-4bc1-aae7-b0485f877616\") " pod="kube-system/kube-proxy-6q8r8"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934317    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9c6e5f1-fd98-4bc1-aae7-b0485f877616-xtables-lock\") pod \"kube-proxy-6q8r8\" (UID: \"a9c6e5f1-fd98-4bc1-aae7-b0485f877616\") " pod="kube-system/kube-proxy-6q8r8"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934342    7090 reconciler.go:41] "Reconciler: start to sync state"
	Feb 03 22:38:01 pause-868256 kubelet[7090]: I0203 22:38:01.486360    7090 scope.go:115] "RemoveContainer" containerID="0364f8ab712b82038f0d44ed3b9a487c0a41355a9bf2c3871bc59cbe494bcd13"
	Feb 03 22:38:03 pause-868256 kubelet[7090]: I0203 22:38:03.332601    7090 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Feb 03 22:38:06 pause-868256 kubelet[7090]: I0203 22:38:06.669092    7090 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Feb 03 22:38:16 pause-868256 kubelet[7090]: I0203 22:38:16.302972    7090 topology_manager.go:210] "Topology Admit Handler"
	Feb 03 22:38:16 pause-868256 kubelet[7090]: I0203 22:38:16.447036    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6bb\" (UniqueName: \"kubernetes.io/projected/48da2fca-7198-449d-bebd-84e7ce3d61e0-kube-api-access-pm6bb\") pod \"storage-provisioner\" (UID: \"48da2fca-7198-449d-bebd-84e7ce3d61e0\") " pod="kube-system/storage-provisioner"
	Feb 03 22:38:16 pause-868256 kubelet[7090]: I0203 22:38:16.447113    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/48da2fca-7198-449d-bebd-84e7ce3d61e0-tmp\") pod \"storage-provisioner\" (UID: \"48da2fca-7198-449d-bebd-84e7ce3d61e0\") " pod="kube-system/storage-provisioner"
	Feb 03 22:38:17 pause-868256 kubelet[7090]: I0203 22:38:17.454077    7090 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.454027366 pod.CreationTimestamp="2023-02-03 22:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-03 22:38:17.453840185 +0000 UTC m=+21.696995514" watchObservedRunningTime="2023-02-03 22:38:17.454027366 +0000 UTC m=+21.697182705"
	
	* 
	* ==> storage-provisioner [a511701d78f4] <==
	* I0203 22:38:16.936999       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0203 22:38:16.946697       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0203 22:38:16.946752       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0203 22:38:16.955065       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0203 22:38:16.955238       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-868256_4a0635b1-4204-4ec5-8fe3-0ffa67459c40!
	I0203 22:38:16.955692       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"02e3f377-c0af-4b3c-adb7-b97e0409d467", APIVersion:"v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-868256_4a0635b1-4204-4ec5-8fe3-0ffa67459c40 became leader
	I0203 22:38:17.055517       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-868256_4a0635b1-4204-4ec5-8fe3-0ffa67459c40!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-868256 -n pause-868256
helpers_test.go:261: (dbg) Run:  kubectl --context pause-868256 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
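The logs above tell one consistent story: etcd is stopped and restarted, each time re-electing its single voting member as leader at a higher term (3→4, then 4→5), and the kube-proxy/kube-scheduler "connection refused" errors cover exactly the window when kube-apiserver on 192.168.85.2:8443 was down. A minimal sketch, not part of the recorded run, for probing the same control plane by hand (assuming the pause-868256 kubeconfig context created by the test still exists):

	# hedged sketch: ask the apiserver for its aggregate readiness checks
	kubectl --context pause-868256 get --raw='/readyz?verbose'
	# list the control-plane pods whose container logs are dumped above
	kubectl --context pause-868256 -n kube-system get pods -o wide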
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-868256
helpers_test.go:235: (dbg) docker inspect pause-868256:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23",
	        "Created": "2023-02-03T22:36:23.260201893Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 957973,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T22:36:23.761992846Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23/hosts",
	        "LogPath": "/var/lib/docker/containers/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23/a4c14c9470c0f6e967b9c703d5bca7e95dffdb47bc6e0f6b96b2de6aaafdee23-json.log",
	        "Name": "/pause-868256",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-868256:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-868256",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b7dd34239fcd3c6a2da0e0eb5ce926abfebc6db69c11302ab54c4ddcb876819-init/diff:/var/lib/docker/overlay2/0b475e32bad1f0dfced579ecb7b5cc72250aea7cec59e31a4743cd3a0d99e940/diff:/var/lib/docker/overlay2/aa2fe43966fc90171971fa0cf45ed489397176948a5d7e5c488c0895ea14fcf9/diff:/var/lib/docker/overlay2/d486d5af4f47c81a76d06ab38edcdd6e7c4c6d44bfccdebbbb9b1e69d39d2b05/diff:/var/lib/docker/overlay2/326412ac9c29a61ae48e2ee6d8d6f87ec6a4fd1bd6016dffb2811bfbfba591f9/diff:/var/lib/docker/overlay2/78f53b59df4fb8a2a788513fbe42773235fbfeeee25597b9ed08ab74e82151c2/diff:/var/lib/docker/overlay2/dd8122f0f83d412f78fbddee374294a4b80687e5536b80215002695f569198f2/diff:/var/lib/docker/overlay2/cc67dde78b4c1492ebd02fe71402ab41b661ee204fbde6d210cf8509387b098f/diff:/var/lib/docker/overlay2/a2b4916ad1fd3586e65047fb83df5d41ebcab71ac2ffa08b0e036e4678cb710a/diff:/var/lib/docker/overlay2/034739ad6486ba53fbfe3b3b421d15c6f3a0dd8fde3a43b07e103abff096d4f1/diff:/var/lib/docker/overlay2/307eda
d9ab61a3663c90810503decfdc670fe1869242a7f31075b6e59d76541a/diff:/var/lib/docker/overlay2/9c55defe4ce8df151985a8f224f3ed60b3859894f0e563ad67f2f4d1732230be/diff:/var/lib/docker/overlay2/e943e6cdbde9389f9a98c170180fedff4c2a9f95d9932705ca166be2d938da89/diff:/var/lib/docker/overlay2/cfdded024a919d0fb407d0de88be58a616371fce4c0976bd8002f580d767b842/diff:/var/lib/docker/overlay2/5d723f8d0c80d5508336518cd9b29f89acf16286d8ccdfb78feb1e37fe0bf064/diff:/var/lib/docker/overlay2/c47949bf11583f6ebcbf720cff56c46f781041344baa330c0bc5c1b61dad2f55/diff:/var/lib/docker/overlay2/27ad1f98760d8a67bd303c2b5611897e161a80beb6c7ed104208b48dd7b91379/diff:/var/lib/docker/overlay2/a0e957e1d2331cbc92f5a999b543942f2031b84ea47f403a499e7bef91d65899/diff:/var/lib/docker/overlay2/a229667103290aefe4a619724ab1234e77b9db8874253aa22c86042b8892c830/diff:/var/lib/docker/overlay2/467130c8e8a7564760c18a6fe07094da15434d5f1e474416b9572afe4b482f35/diff:/var/lib/docker/overlay2/cd5ca47a80e9064bab4601161848e63acf588fa9229e1174ab542acb88a97b16/diff:/var/lib/d
ocker/overlay2/a797536bd93f660222d6488b3f3ccb7d093128ad2c053b2e2be52eef7031bea6/diff:/var/lib/docker/overlay2/248250b521a0dd8701f96cf524c3c3739a1eff512d14fb30c74827361b312b32/diff:/var/lib/docker/overlay2/062e2ddeefb5ad4346bda8722a899aa52dda719d4249498404cb2d4892536de4/diff:/var/lib/docker/overlay2/fc997cd730a7dd26b34f6e703d278a224680a059dacf504900111dc9a025bbf0/diff:/var/lib/docker/overlay2/f577bb4339434ce3c9ded35d7cae363bc0f8211505f076fabb90fba761421598/diff:/var/lib/docker/overlay2/e8ac8d4860f647d09162c5b7a3176ddd3c2e474bbccd68be7c16766a7fd23cc3/diff:/var/lib/docker/overlay2/83c501c19fcfb1a35a817eaeeb945d275930e39c796dfc74152c43fdde79ab84/diff:/var/lib/docker/overlay2/0e920c20ffbb5e7feb23e6614ca1f2087335c096eb0309328a0689561d3a34b7/diff:/var/lib/docker/overlay2/fddb0961123e581f39614f85a12371d378053c880449edc8ef02b7b59d37acbd/diff:/var/lib/docker/overlay2/79a3dd2dc2deaed4119301832c81086def768bb1f385f355d4040d07da72699c/diff:/var/lib/docker/overlay2/d8ab98e1745fd7d47f1072f953123e3f453d00a4142308cac37c683e7e2
15755/diff:/var/lib/docker/overlay2/cf689ce035c88cc3cd979840cd72f78a9a4dcc62b2908837d83e705d0188a595/diff:/var/lib/docker/overlay2/f3ef7125ac2d8a6c9d2b633eb3fb34158b96a4639a2ef3d6d3bed8c91b5a6f2f/diff:/var/lib/docker/overlay2/e4e0e186cf2cf07dae99d67e45b1e480bbf4af91d131348c6d2124f0b201a650/diff:/var/lib/docker/overlay2/a50f9577818b2898c6d148599e38b6a88d0d80085a584bba96928c73f334cbcd/diff:/var/lib/docker/overlay2/2efc2fb2ee969b3eb5d1bde8184f7a96ea316eda6b6a74665936973ea3f3bd6b/diff:/var/lib/docker/overlay2/76cfcade4e4ca9badc64f6ada01efa5198447e393a87405de24b1418986c5e84/diff:/var/lib/docker/overlay2/503b12ed217c06e41cae8cd4644f7e70792be89545abf400682521433255eb6c/diff:/var/lib/docker/overlay2/f051a3728d7742609a1e79f10ecf540a169426957992d56f8c95311390abf08c/diff:/var/lib/docker/overlay2/dd655e28a7bca3a64c71fc29b401269c2f81b35cfcd5cdf0174304407eaf4433/diff:/var/lib/docker/overlay2/3b2197e4d79d675e680efd7a515dbd55aeac009a711fd6f0c3986eaa894c0e9d/diff:/var/lib/docker/overlay2/10aec7220005d1e9a6082e19fec2237d778d9d
752c48da1ce707c0001e09f158/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b7dd34239fcd3c6a2da0e0eb5ce926abfebc6db69c11302ab54c4ddcb876819/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b7dd34239fcd3c6a2da0e0eb5ce926abfebc6db69c11302ab54c4ddcb876819/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b7dd34239fcd3c6a2da0e0eb5ce926abfebc6db69c11302ab54c4ddcb876819/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-868256",
	                "Source": "/var/lib/docker/volumes/pause-868256/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-868256",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-868256",
	                "name.minikube.sigs.k8s.io": "pause-868256",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8a59c4ee03c9dc235af6991d7b1f1f46e8572118408bb5130b803ba9ad30e3f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33311"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33310"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33307"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33309"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33308"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a8a59c4ee03c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-868256": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a4c14c9470c0",
	                        "pause-868256"
	                    ],
	                    "NetworkID": "561f7143c42d387f8e4c8725c3705eedda5f89c460cd4b2e7e8dd55f7e009901",
	                    "EndpointID": "baf703b9428a8be97ace56ce7385a5313d5ed205e25bd4c2c491adcf4f056294",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
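Everything the harness reads out of that dump can also be pulled field-by-field with docker's built-in Go-template formatter instead of hand-parsing the JSON; a small sketch, assuming the pause-868256 container above is still running:

	# container state plus its address on the pause-868256 network
	docker inspect -f '{{ .State.Status }} {{ (index .NetworkSettings.Networks "pause-868256").IPAddress }}' pause-868256
	# host port published for the apiserver's 8443/tcp (33308 in the dump above)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' pause-868256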
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-868256 -n pause-868256
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-868256 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-868256 logs -n 25: (1.736707844s)
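The -n 25 above caps the capture at the last 25 lines per component, which is why the per-component sections begin mid-stream. A hedged sketch of a wider capture for local triage, assuming the same out/minikube-linux-amd64 binary and profile (--file writes the dump to a file instead of stdout):

	out/minikube-linux-amd64 -p pause-868256 logs -n 100 --file=pause-868256-logs.txt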
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-env-432494           | force-systemd-env-432494  | jenkins | v1.29.0 | 03 Feb 23 22:35 UTC | 03 Feb 23 22:36 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-955330          | kubernetes-upgrade-955330 | jenkins | v1.29.0 | 03 Feb 23 22:35 UTC | 03 Feb 23 22:36 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-804939             | force-systemd-flag-804939 | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:36 UTC |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-804939          | force-systemd-flag-804939 | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:36 UTC |
	| start   | -p pause-868256 --memory=2048         | pause-868256              | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:37 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-955330          | kubernetes-upgrade-955330 | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-955330          | kubernetes-upgrade-955330 | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:36 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-432494              | force-systemd-env-432494  | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:36 UTC |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-432494           | force-systemd-env-432494  | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:36 UTC |
	| start   | -p cert-expiration-012867             | cert-expiration-012867    | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:37 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-955330          | kubernetes-upgrade-955330 | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:36 UTC |
	| start   | -p docker-flags-636731                | docker-flags-636731       | jenkins | v1.29.0 | 03 Feb 23 22:36 UTC | 03 Feb 23 22:37 UTC |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p pause-868256                       | pause-868256              | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:38 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p running-upgrade-086031             | running-upgrade-086031    | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | docker-flags-636731 ssh               | docker-flags-636731       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-636731 ssh               | docker-flags-636731       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-636731                | docker-flags-636731       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	| delete  | -p running-upgrade-086031             | running-upgrade-086031    | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	| start   | -p cert-options-145838                | cert-options-145838       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| start   | -p auto-770968 --memory=3072          | auto-770968               | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:38 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | cert-options-145838 ssh               | cert-options-145838       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-145838 -- sudo        | cert-options-145838       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:37 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-145838                | cert-options-145838       | jenkins | v1.29.0 | 03 Feb 23 22:37 UTC | 03 Feb 23 22:38 UTC |
	| start   | -p kindnet-770968                     | kindnet-770968            | jenkins | v1.29.0 | 03 Feb 23 22:38 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker         |                           |         |         |                     |                     |
	|         | --container-runtime=docker            |                           |         |         |                     |                     |
	| ssh     | -p auto-770968 pgrep -a               | auto-770968               | jenkins | v1.29.0 | 03 Feb 23 22:38 UTC | 03 Feb 23 22:38 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
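
For reference, the docker-flags rows in the table above can be replayed by hand. A minimal sketch, reusing the binary path and flag values exactly as they appear in the table (not re-verified here):

    # Start a profile with custom dockerd environment and options
    out/minikube-linux-amd64 start -p docker-flags-636731 --memory=2048 \
      --docker-env=FOO=BAR --docker-env=BAZ=BAT \
      --docker-opt=debug --docker-opt=icc=true \
      --driver=docker --container-runtime=docker

    # Confirm the environment reached the in-node docker daemon
    out/minikube-linux-amd64 ssh -p docker-flags-636731 -- \
      sudo systemctl show docker --property=Environment --no-pager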
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/03 22:38:02
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 22:38:02.961492 1004591 out.go:296] Setting OutFile to fd 1 ...
	I0203 22:38:02.961596 1004591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:38:02.961603 1004591 out.go:309] Setting ErrFile to fd 2...
	I0203 22:38:02.961608 1004591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:38:02.961719 1004591 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15770-643340/.minikube/bin
	I0203 22:38:02.962400 1004591 out.go:303] Setting JSON to false
	I0203 22:38:02.964255 1004591 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8433,"bootTime":1675455450,"procs":952,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 22:38:02.964369 1004591 start.go:135] virtualization: kvm guest
	I0203 22:38:02.967570 1004591 out.go:177] * [kindnet-770968] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 22:38:02.969472 1004591 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 22:38:02.969414 1004591 notify.go:220] Checking for updates...
	I0203 22:38:02.971069 1004591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 22:38:02.972916 1004591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:38:02.974648 1004591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	I0203 22:38:02.976527 1004591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 22:38:02.978300 1004591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 22:38:02.980613 1004591 config.go:180] Loaded profile config "auto-770968": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:02.980753 1004591 config.go:180] Loaded profile config "cert-expiration-012867": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:02.980876 1004591 config.go:180] Loaded profile config "pause-868256": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:02.980938 1004591 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 22:38:03.072418 1004591 docker.go:141] docker version: linux-23.0.0:Docker Engine - Community
	I0203 22:38:03.072528 1004591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 22:38:03.211307 1004591 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:47 SystemTime:2023-02-03 22:38:03.201040188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:31aa4358a36870b21a992d3ad2bef29e1d693bec Expected:31aa4358a36870b21a992d3ad2bef29e1d693bec} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 22:38:03.211421 1004591 docker.go:282] overlay module found
	I0203 22:38:03.214267 1004591 out.go:177] * Using the docker driver based on user configuration
	I0203 22:38:03.215944 1004591 start.go:296] selected driver: docker
	I0203 22:38:03.215978 1004591 start.go:857] validating driver "docker" against <nil>
	I0203 22:38:03.216006 1004591 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 22:38:03.216985 1004591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 22:38:03.351643 1004591 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:47 SystemTime:2023-02-03 22:38:03.342284737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:31aa4358a36870b21a992d3ad2bef29e1d693bec Expected:31aa4358a36870b21a992d3ad2bef29e1d693bec} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 22:38:03.351770 1004591 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0203 22:38:03.351973 1004591 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 22:38:03.354522 1004591 out.go:177] * Using Docker driver with root privileges
	I0203 22:38:03.357363 1004591 cni.go:84] Creating CNI manager for "kindnet"
	I0203 22:38:03.357396 1004591 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0203 22:38:03.357410 1004591 start_flags.go:319] config:
	{Name:kindnet-770968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kindnet-770968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 22:38:03.359508 1004591 out.go:177] * Starting control plane node kindnet-770968 in cluster kindnet-770968
	I0203 22:38:03.361539 1004591 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 22:38:03.363451 1004591 out.go:177] * Pulling base image ...
	I0203 22:38:03.365291 1004591 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 22:38:03.365358 1004591 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0203 22:38:03.365373 1004591 cache.go:57] Caching tarball of preloaded images
	I0203 22:38:03.365413 1004591 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 22:38:03.365489 1004591 preload.go:174] Found /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 22:38:03.365505 1004591 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0203 22:38:03.365725 1004591 profile.go:148] Saving config to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/config.json ...
	I0203 22:38:03.365764 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/config.json: {Name:mk5f9111854d4b577e0eaace8a28dd6870591f7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:03.440004 1004591 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon, skipping pull
	I0203 22:38:03.440032 1004591 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in daemon, skipping load
	I0203 22:38:03.440056 1004591 cache.go:193] Successfully downloaded all kic artifacts
	I0203 22:38:03.440092 1004591 start.go:364] acquiring machines lock for kindnet-770968: {Name:mk4aa1a98cb1fcf6397c55c385c6f84ed8f4ce0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 22:38:03.440226 1004591 start.go:368] acquired machines lock for "kindnet-770968" in 111.78µs
	I0203 22:38:03.440262 1004591 start.go:93] Provisioning new machine with config: &{Name:kindnet-770968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kindnet-770968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 22:38:03.440461 1004591 start.go:125] createHost starting for "" (driver="docker")
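
The image.go lines above (22:38:03.365413 and 22:38:03.440004) show the kic base image check short-circuiting the pull because the digest is already present in the local daemon. A rough manual equivalent, assuming the docker CLI can resolve the digest reference locally:

    # If the digest resolves locally, minikube skips the pull
    docker image inspect --format '{{.Id}}' \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 \
      && echo "kic base image present; pull will be skipped"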
	I0203 22:38:02.064102  979588 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 22:38:02.074550  979588 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0203 22:38:02.091209  979588 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 22:38:02.102275  979588 system_pods.go:59] 6 kube-system pods found
	I0203 22:38:02.102317  979588 system_pods.go:61] "coredns-787d4945fb-dd5vv" [aed13814-af10-4c1c-9548-20630079cd3c] Running
	I0203 22:38:02.102329  979588 system_pods.go:61] "etcd-pause-868256" [595c2af1-3166-4d59-969d-bc282f646ed5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 22:38:02.102339  979588 system_pods.go:61] "kube-apiserver-pause-868256" [2a4d2f7d-025b-47a7-99c5-70079183e798] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0203 22:38:02.102349  979588 system_pods.go:61] "kube-controller-manager-pause-868256" [8f8a526f-dea5-4e08-8258-dd4e4654ae32] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 22:38:02.102366  979588 system_pods.go:61] "kube-proxy-6q8r8" [a9c6e5f1-fd98-4bc1-aae7-b0485f877616] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0203 22:38:02.102378  979588 system_pods.go:61] "kube-scheduler-pause-868256" [72bf6a79-cdaf-46bb-93fc-f8d402880694] Running
	I0203 22:38:02.102386  979588 system_pods.go:74] duration metric: took 11.150861ms to wait for pod list to return data ...
	I0203 22:38:02.102398  979588 node_conditions.go:102] verifying NodePressure condition ...
	I0203 22:38:02.106217  979588 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0203 22:38:02.106247  979588 node_conditions.go:123] node cpu capacity is 8
	I0203 22:38:02.106260  979588 node_conditions.go:105] duration metric: took 3.856582ms to run NodePressure ...
	I0203 22:38:02.106283  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 22:38:02.357304  979588 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0203 22:38:02.362615  979588 kubeadm.go:784] kubelet initialised
	I0203 22:38:02.362643  979588 kubeadm.go:785] duration metric: took 5.310734ms waiting for restarted kubelet to initialise ...
	I0203 22:38:02.362654  979588 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:02.370043  979588 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:02.376254  979588 pod_ready.go:92] pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:02.376290  979588 pod_ready.go:81] duration metric: took 6.215526ms waiting for pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:02.376304  979588 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:04.391744  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:06.392746  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
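
The pod_ready polls above reduce to reading the pod's Ready condition. Done directly with kubectl it might look like the sketch below; the context name mirroring the profile is an assumption about the local kubeconfig:

    # Readiness probe for the etcd pod polled above; prints True or False
    kubectl --context pause-868256 -n kube-system get pod etcd-pause-868256 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'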
	I0203 22:38:01.869330  993024 ops.go:34] apiserver oom_adj: -16
	I0203 22:38:01.869359  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:02.476387  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:02.976871  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:03.476460  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:03.976437  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:04.477266  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:04.977281  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:05.477172  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:05.976833  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:06.476388  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
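
The repeated `kubectl get sa default` runs above are the post-init wait for the default ServiceAccount to be created. A hand-rolled version of the same loop, with the binary and kubeconfig paths copied from the log:

    # Poll roughly twice a second until the default ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done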
	I0203 22:38:03.443509 1004591 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0203 22:38:03.443770 1004591 start.go:159] libmachine.API.Create for "kindnet-770968" (driver="docker")
	I0203 22:38:03.443804 1004591 client.go:168] LocalClient.Create starting
	I0203 22:38:03.443914 1004591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem
	I0203 22:38:03.443950 1004591 main.go:141] libmachine: Decoding PEM data...
	I0203 22:38:03.443967 1004591 main.go:141] libmachine: Parsing certificate...
	I0203 22:38:03.444024 1004591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15770-643340/.minikube/certs/cert.pem
	I0203 22:38:03.444041 1004591 main.go:141] libmachine: Decoding PEM data...
	I0203 22:38:03.444050 1004591 main.go:141] libmachine: Parsing certificate...
	I0203 22:38:03.444446 1004591 cli_runner.go:164] Run: docker network inspect kindnet-770968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0203 22:38:03.519482 1004591 cli_runner.go:211] docker network inspect kindnet-770968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0203 22:38:03.519566 1004591 network_create.go:281] running [docker network inspect kindnet-770968] to gather additional debugging logs...
	I0203 22:38:03.519592 1004591 cli_runner.go:164] Run: docker network inspect kindnet-770968
	W0203 22:38:03.594705 1004591 cli_runner.go:211] docker network inspect kindnet-770968 returned with exit code 1
	I0203 22:38:03.594756 1004591 network_create.go:284] error running [docker network inspect kindnet-770968]: docker network inspect kindnet-770968: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-770968 not found
	I0203 22:38:03.594773 1004591 network_create.go:286] output of [docker network inspect kindnet-770968]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-770968 not found
	
	** /stderr **
	I0203 22:38:03.594851 1004591 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0203 22:38:03.670584 1004591 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-27eee80fa331 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:28:fa:75:ad} reservation:<nil>}
	I0203 22:38:03.671765 1004591 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-516c71c0568d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:3f:36:3a:09} reservation:<nil>}
	I0203 22:38:03.672868 1004591 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000c4c030}
	I0203 22:38:03.672891 1004591 network_create.go:123] attempt to create docker network kindnet-770968 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0203 22:38:03.672938 1004591 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-770968 kindnet-770968
	I0203 22:38:03.785661 1004591 network_create.go:107] docker network kindnet-770968 192.168.67.0/24 created
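
The subnet scan above walks candidate private /24s and skips any already claimed by an existing bridge. The claimed ranges can be listed directly; a small sketch using only standard docker CLI calls:

    # Show each docker network together with the subnet it occupies
    docker network ls -q | xargs docker network inspect \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'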
	I0203 22:38:03.785690 1004591 kic.go:117] calculated static IP "192.168.67.2" for the "kindnet-770968" container
	I0203 22:38:03.785746 1004591 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0203 22:38:03.866191 1004591 cli_runner.go:164] Run: docker volume create kindnet-770968 --label name.minikube.sigs.k8s.io=kindnet-770968 --label created_by.minikube.sigs.k8s.io=true
	I0203 22:38:03.943524 1004591 oci.go:103] Successfully created a docker volume kindnet-770968
	I0203 22:38:03.943609 1004591 cli_runner.go:164] Run: docker run --rm --name kindnet-770968-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-770968 --entrypoint /usr/bin/test -v kindnet-770968:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -d /var/lib
	I0203 22:38:04.653592 1004591 oci.go:107] Successfully prepared a docker volume kindnet-770968
	I0203 22:38:04.653672 1004591 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 22:38:04.653703 1004591 kic.go:190] Starting extracting preloaded images to volume ...
	I0203 22:38:04.653800 1004591 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-770968:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir
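
The sidecar run above untars the cached preload straight into the profile's named volume. Whether it landed can be spot-checked with another throwaway container; that /bin/ls exists in the kicbase image is an assumption here:

    # List the extracted image store inside the kindnet-770968 volume
    docker run --rm -v kindnet-770968:/var --entrypoint /bin/ls \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 \
      /var/lib/docker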
	I0203 22:38:09.013459  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:11.392017  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:06.977114  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:07.476494  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:07.976424  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:08.476707  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:08.976968  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:09.476478  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:09.976259  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:10.477171  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:10.977273  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:11.477102  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:09.928372 1004591 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-770968:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.27447243s)
	I0203 22:38:09.928408 1004591 kic.go:199] duration metric: took 5.274701 seconds to extract preloaded images to volume
	W0203 22:38:09.928565 1004591 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0203 22:38:09.928701 1004591 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0203 22:38:10.068005 1004591 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-770968 --name kindnet-770968 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-770968 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-770968 --network kindnet-770968 --ip 192.168.67.2 --volume kindnet-770968:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8
	I0203 22:38:10.570837 1004591 cli_runner.go:164] Run: docker container inspect kindnet-770968 --format={{.State.Running}}
	I0203 22:38:10.658192 1004591 cli_runner.go:164] Run: docker container inspect kindnet-770968 --format={{.State.Status}}
	I0203 22:38:10.731105 1004591 cli_runner.go:164] Run: docker exec kindnet-770968 stat /var/lib/dpkg/alternatives/iptables
	I0203 22:38:10.835591 1004591 oci.go:144] the created container "kindnet-770968" has a running status.
	I0203 22:38:10.835627 1004591 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa...
	I0203 22:38:11.045195 1004591 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0203 22:38:11.184176 1004591 cli_runner.go:164] Run: docker container inspect kindnet-770968 --format={{.State.Status}}
	I0203 22:38:11.261864 1004591 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0203 22:38:11.261893 1004591 kic_runner.go:114] Args: [docker exec --privileged kindnet-770968 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0203 22:38:11.394931 1004591 cli_runner.go:164] Run: docker container inspect kindnet-770968 --format={{.State.Status}}
	I0203 22:38:11.468436 1004591 machine.go:88] provisioning docker machine ...
	I0203 22:38:11.468520 1004591 ubuntu.go:169] provisioning hostname "kindnet-770968"
	I0203 22:38:11.468600 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:11.545680 1004591 main.go:141] libmachine: Using SSH client type: native
	I0203 22:38:11.545906 1004591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33336 <nil> <nil>}
	I0203 22:38:11.545929 1004591 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-770968 && echo "kindnet-770968" | sudo tee /etc/hostname
	I0203 22:38:11.690578 1004591 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-770968
	
	I0203 22:38:11.690678 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:11.763689 1004591 main.go:141] libmachine: Using SSH client type: native
	I0203 22:38:11.763865 1004591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33336 <nil> <nil>}
	I0203 22:38:11.763888 1004591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-770968' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-770968/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-770968' | sudo tee -a /etc/hosts; 
				fi
			fi
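
The SSH snippet above pins the profile name to 127.0.1.1 in the node's /etc/hosts. A quick check that it took effect, using the same ssh form as the table rows earlier:

    # Show the hostname mapping written by the provisioner
    out/minikube-linux-amd64 ssh -p kindnet-770968 -- grep kindnet-770968 /etc/hosts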
	I0203 22:38:11.892595 1004591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 22:38:11.892629 1004591 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15770-643340/.minikube CaCertPath:/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15770-643340/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15770-643340/.minikube}
	I0203 22:38:11.892654 1004591 ubuntu.go:177] setting up certificates
	I0203 22:38:11.892665 1004591 provision.go:83] configureAuth start
	I0203 22:38:11.892726 1004591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-770968
	I0203 22:38:11.963212 1004591 provision.go:138] copyHostCerts
	I0203 22:38:11.963277 1004591 exec_runner.go:144] found /home/jenkins/minikube-integration/15770-643340/.minikube/ca.pem, removing ...
	I0203 22:38:11.963292 1004591 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15770-643340/.minikube/ca.pem
	I0203 22:38:11.963362 1004591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15770-643340/.minikube/ca.pem (1082 bytes)
	I0203 22:38:11.963457 1004591 exec_runner.go:144] found /home/jenkins/minikube-integration/15770-643340/.minikube/cert.pem, removing ...
	I0203 22:38:11.963466 1004591 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15770-643340/.minikube/cert.pem
	I0203 22:38:11.963488 1004591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15770-643340/.minikube/cert.pem (1123 bytes)
	I0203 22:38:11.963549 1004591 exec_runner.go:144] found /home/jenkins/minikube-integration/15770-643340/.minikube/key.pem, removing ...
	I0203 22:38:11.963556 1004591 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15770-643340/.minikube/key.pem
	I0203 22:38:11.963576 1004591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15770-643340/.minikube/key.pem (1679 bytes)
	I0203 22:38:11.963630 1004591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15770-643340/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca-key.pem org=jenkins.kindnet-770968 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-770968]
	I0203 22:38:12.292618 1004591 provision.go:172] copyRemoteCerts
	I0203 22:38:12.292682 1004591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 22:38:12.292731 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:12.364692 1004591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa Username:docker}
	I0203 22:38:12.456649 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0203 22:38:12.475748 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0203 22:38:12.495875 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 22:38:12.520098 1004591 provision.go:86] duration metric: configureAuth took 627.416996ms
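
configureAuth above generates a server certificate with the SANs listed at 22:38:11.963630 and copies it to /etc/docker inside the node. One way to read it back, reusing the openssl form from the cert-options rows in the table:

    # Inspect the server certificate minikube installed for the docker daemon
    out/minikube-linux-amd64 ssh -p kindnet-770968 -- \
      sudo openssl x509 -noout -subject -in /etc/docker/server.pem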
	I0203 22:38:12.520131 1004591 ubuntu.go:193] setting minikube options for container-runtime
	I0203 22:38:12.520405 1004591 config.go:180] Loaded profile config "kindnet-770968": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:12.520477 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:12.597442 1004591 main.go:141] libmachine: Using SSH client type: native
	I0203 22:38:12.597639 1004591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33336 <nil> <nil>}
	I0203 22:38:12.597655 1004591 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 22:38:12.729193 1004591 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0203 22:38:12.729221 1004591 ubuntu.go:71] root file system type: overlay
	I0203 22:38:12.729450 1004591 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 22:38:12.729520 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:12.805224 1004591 main.go:141] libmachine: Using SSH client type: native
	I0203 22:38:12.805370 1004591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33336 <nil> <nil>}
	I0203 22:38:12.805430 1004591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 22:38:12.942364 1004591 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 22:38:12.942439 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
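
Once docker.service.new above is swapped in and the daemon restarted, the effective daemon command line can be read back the same way the docker-flags rows in the table do:

    # Inspect the ExecStart produced by the generated unit
    out/minikube-linux-amd64 ssh -p kindnet-770968 -- \
      sudo systemctl show docker --property=ExecStart --no-pager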
	I0203 22:38:11.976458  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:12.476409  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:12.976521  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:13.476955  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:13.977055  993024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 22:38:14.148500  993024 kubeadm.go:1073] duration metric: took 12.710452798s to wait for elevateKubeSystemPrivileges.
	I0203 22:38:14.148538  993024 kubeadm.go:403] StartCluster complete in 27.462557503s
	I0203 22:38:14.148563  993024 settings.go:142] acquiring lock: {Name:mkf92d82d8749aa11cbf8d7cc1c5c387b3a944f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:14.148652  993024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:38:14.150281  993024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/kubeconfig: {Name:mk7b0a220bbb894990ed89116f6b1e42d435549f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:14.151672  993024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0203 22:38:14.151961  993024 config.go:180] Loaded profile config "auto-770968": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:14.152014  993024 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0203 22:38:14.152086  993024 addons.go:65] Setting storage-provisioner=true in profile "auto-770968"
	I0203 22:38:14.152120  993024 addons.go:227] Setting addon storage-provisioner=true in "auto-770968"
	W0203 22:38:14.152128  993024 addons.go:236] addon storage-provisioner should already be in state true
	I0203 22:38:14.152178  993024 host.go:66] Checking if "auto-770968" exists ...
	I0203 22:38:14.152720  993024 addons.go:65] Setting default-storageclass=true in profile "auto-770968"
	I0203 22:38:14.152744  993024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-770968"
	I0203 22:38:14.152785  993024 cli_runner.go:164] Run: docker container inspect auto-770968 --format={{.State.Status}}
	I0203 22:38:14.153024  993024 cli_runner.go:164] Run: docker container inspect auto-770968 --format={{.State.Status}}
	I0203 22:38:14.249833  993024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 22:38:14.251861  993024 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 22:38:14.251889  993024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 22:38:14.251955  993024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770968
	I0203 22:38:14.267192  993024 addons.go:227] Setting addon default-storageclass=true in "auto-770968"
	W0203 22:38:14.267227  993024 addons.go:236] addon default-storageclass should already be in state true
	I0203 22:38:14.267262  993024 host.go:66] Checking if "auto-770968" exists ...
	I0203 22:38:14.267778  993024 cli_runner.go:164] Run: docker container inspect auto-770968 --format={{.State.Status}}
	I0203 22:38:14.351780  993024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33331 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/auto-770968/id_rsa Username:docker}
	I0203 22:38:14.376996  993024 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 22:38:14.377023  993024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 22:38:14.377082  993024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770968
	I0203 22:38:14.450495  993024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0203 22:38:14.479896  993024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33331 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/auto-770968/id_rsa Username:docker}
	I0203 22:38:14.553159  993024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 22:38:14.655981  993024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 22:38:14.742628  993024 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-770968" context rescaled to 1 replicas
	I0203 22:38:14.742674  993024 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 22:38:14.745012  993024 out.go:177] * Verifying Kubernetes components...
	I0203 22:38:13.393177  979588 pod_ready.go:102] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"False"
	I0203 22:38:14.892284  979588 pod_ready.go:92] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.892315  979588 pod_ready.go:81] duration metric: took 12.516003387s waiting for pod "etcd-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.892325  979588 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.896342  979588 pod_ready.go:92] pod "kube-apiserver-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.896361  979588 pod_ready.go:81] duration metric: took 4.029948ms waiting for pod "kube-apiserver-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.896372  979588 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.900459  979588 pod_ready.go:92] pod "kube-controller-manager-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.900476  979588 pod_ready.go:81] duration metric: took 4.097977ms waiting for pod "kube-controller-manager-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.900488  979588 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6q8r8" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.904379  979588 pod_ready.go:92] pod "kube-proxy-6q8r8" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.904395  979588 pod_ready.go:81] duration metric: took 3.900784ms waiting for pod "kube-proxy-6q8r8" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.904404  979588 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.908308  979588 pod_ready.go:92] pod "kube-scheduler-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:14.908329  979588 pod_ready.go:81] duration metric: took 3.918339ms waiting for pod "kube-scheduler-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:14.908336  979588 pod_ready.go:38] duration metric: took 12.545672865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
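The pod_ready.go loop above amounts to polling each pod until its PodReady condition reports True. A hedged sketch of the same check with client-go (helper name hypothetical; clientset construction is elided here, see the rest.Config sketch below):

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod's PodReady condition is True, mirroring
// the "waiting up to 4m0s for pod ... to be Ready" lines above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // minikube's actual backoff may differ
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}
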
	I0203 22:38:14.908355  979588 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 22:38:14.915923  979588 ops.go:34] apiserver oom_adj: -16
	I0203 22:38:14.915946  979588 kubeadm.go:637] restartCluster took 55.680977837s
	I0203 22:38:14.915955  979588 kubeadm.go:403] StartCluster complete in 55.764379154s
	I0203 22:38:14.915973  979588 settings.go:142] acquiring lock: {Name:mkf92d82d8749aa11cbf8d7cc1c5c387b3a944f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:14.916045  979588 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:38:14.917278  979588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/kubeconfig: {Name:mk7b0a220bbb894990ed89116f6b1e42d435549f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:14.917594  979588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0203 22:38:14.917805  979588 config.go:180] Loaded profile config "pause-868256": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:38:14.917754  979588 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0203 22:38:14.917856  979588 addons.go:65] Setting storage-provisioner=true in profile "pause-868256"
	I0203 22:38:14.917860  979588 addons.go:65] Setting default-storageclass=true in profile "pause-868256"
	I0203 22:38:14.917878  979588 addons.go:227] Setting addon storage-provisioner=true in "pause-868256"
	I0203 22:38:14.917884  979588 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-868256"
	W0203 22:38:14.917890  979588 addons.go:236] addon storage-provisioner should already be in state true
	I0203 22:38:14.917954  979588 host.go:66] Checking if "pause-868256" exists ...
	I0203 22:38:14.918184  979588 cli_runner.go:164] Run: docker container inspect pause-868256 --format={{.State.Status}}
	I0203 22:38:14.918353  979588 cli_runner.go:164] Run: docker container inspect pause-868256 --format={{.State.Status}}
	I0203 22:38:14.918447  979588 kapi.go:59] client config for pause-868256: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.crt", KeyFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.key", CAFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1891540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
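The kapi.go:59 dump above shows the client is built straight from the profile's client cert/key and the cluster CA. A sketch of the same construction using client-go's exported rest.TLSClientConfig (the log prints the internal sanitized form; the helper name is hypothetical):

package kapi

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// clientFor builds a clientset from the same fields the dump above shows:
// API server URL plus mutual-TLS material from the minikube profile.
func clientFor(host, certFile, keyFile, caFile string) (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: host, // e.g. "https://192.168.85.2:8443"
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: certFile,
			KeyFile:  keyFile,
			CAFile:   caFile,
		},
	}
	return kubernetes.NewForConfig(cfg)
}
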
	I0203 22:38:14.921602  979588 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-868256" context rescaled to 1 replicas
	I0203 22:38:14.921645  979588 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 22:38:14.924729  979588 out.go:177] * Verifying Kubernetes components...
	I0203 22:38:14.926880  979588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 22:38:15.030594  979588 node_ready.go:35] waiting up to 6m0s for node "pause-868256" to be "Ready" ...
	I0203 22:38:15.030687  979588 start.go:892] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0203 22:38:15.043534  979588 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 22:38:14.747583  993024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 22:38:15.977908  993024 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.527368471s)
	I0203 22:38:15.977942  993024 start.go:919] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
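The bash pipeline that just completed fetches the coredns ConfigMap, splices a hosts{} block in front of the forward plugin so host.minikube.internal resolves in-cluster, and replaces the object. An equivalent client-go sketch (hypothetical helper; simplified string splice that assumes a single "forward ." stanza in the Corefile):

package corednspatch

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord rewrites the Corefile so the given IP answers for
// host.minikube.internal, as the kubectl | sed pipeline above does.
func injectHostRecord(ctx context.Context, cs kubernetes.Interface, ip string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
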
	I0203 22:38:16.053807  993024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.397786019s)
	I0203 22:38:16.053868  993024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.500683626s)
	I0203 22:38:16.056686  993024 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0203 22:38:16.054285  993024 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.306658637s)
	I0203 22:38:15.045509  979588 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 22:38:15.045533  979588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 22:38:15.045599  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:38:15.052408  979588 kapi.go:59] client config for pause-868256: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.crt", KeyFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/profiles/pause-868256/client.key", CAFile:"/home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1891540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 22:38:15.055523  979588 addons.go:227] Setting addon default-storageclass=true in "pause-868256"
	W0203 22:38:15.055547  979588 addons.go:236] addon default-storageclass should already be in state true
	I0203 22:38:15.055578  979588 host.go:66] Checking if "pause-868256" exists ...
	I0203 22:38:15.056030  979588 cli_runner.go:164] Run: docker container inspect pause-868256 --format={{.State.Status}}
	I0203 22:38:15.095382  979588 node_ready.go:49] node "pause-868256" has status "Ready":"True"
	I0203 22:38:15.095405  979588 node_ready.go:38] duration metric: took 64.776768ms waiting for node "pause-868256" to be "Ready" ...
	I0203 22:38:15.095415  979588 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:15.170238  979588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/pause-868256/id_rsa Username:docker}
	I0203 22:38:15.180586  979588 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 22:38:15.180615  979588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 22:38:15.180676  979588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-868256
	I0203 22:38:15.281164  979588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/pause-868256/id_rsa Username:docker}
	I0203 22:38:15.282096  979588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 22:38:15.294779  979588 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:15.398048  979588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 22:38:15.690599  979588 pod_ready.go:92] pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:15.690623  979588 pod_ready.go:81] duration metric: took 395.806992ms waiting for pod "coredns-787d4945fb-dd5vv" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:15.690637  979588 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.091573  979588 pod_ready.go:92] pod "etcd-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:16.091597  979588 pod_ready.go:81] duration metric: took 400.951095ms waiting for pod "etcd-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.091610  979588 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.302870  979588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.020734682s)
	I0203 22:38:16.305381  979588 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0203 22:38:16.306921  979588 addons.go:492] enable addons completed in 1.389182055s: enabled=[storage-provisioner default-storageclass]
	I0203 22:38:13.013687 1004591 main.go:141] libmachine: Using SSH client type: native
	I0203 22:38:13.013894 1004591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 33336 <nil> <nil>}
	I0203 22:38:13.013923 1004591 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 22:38:13.742305 1004591 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:38:12.937576440 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0203 22:38:13.742351 1004591 machine.go:91] provisioned docker machine in 2.273879376s
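The SSH command and diff above implement an update-if-changed install: the staged unit only replaces the live one, and docker is only reloaded and restarted, when the rendered file actually differs. A sketch of the same pattern (hypothetical helper, shelling out the way the harness does over SSH):

package units

import (
	"fmt"
	"os/exec"
)

// installIfChanged swaps in the staged unit and restarts docker only when
// it differs from the live one, mirroring the
// `diff -u ... || { mv ...; daemon-reload; enable; restart; }` command above.
func installIfChanged(live, staged string) error {
	if exec.Command("diff", "-u", live, staged).Run() == nil {
		return nil // identical: nothing to move, nothing to restart
	}
	steps := [][]string{
		{"sudo", "mv", staged, live},
		{"sudo", "systemctl", "-f", "daemon-reload"},
		{"sudo", "systemctl", "-f", "enable", "docker"},
		{"sudo", "systemctl", "-f", "restart", "docker"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", s, err, out)
		}
	}
	return nil
}
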
	I0203 22:38:13.742362 1004591 client.go:171] LocalClient.Create took 10.298549152s
	I0203 22:38:13.742384 1004591 start.go:167] duration metric: libmachine.API.Create for "kindnet-770968" took 10.29861439s
	I0203 22:38:13.742394 1004591 start.go:300] post-start starting for "kindnet-770968" (driver="docker")
	I0203 22:38:13.742406 1004591 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 22:38:13.742469 1004591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 22:38:13.742528 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:13.814394 1004591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa Username:docker}
	I0203 22:38:13.912829 1004591 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 22:38:13.916310 1004591 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0203 22:38:13.916344 1004591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0203 22:38:13.916358 1004591 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0203 22:38:13.916366 1004591 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0203 22:38:13.916378 1004591 filesync.go:126] Scanning /home/jenkins/minikube-integration/15770-643340/.minikube/addons for local assets ...
	I0203 22:38:13.916447 1004591 filesync.go:126] Scanning /home/jenkins/minikube-integration/15770-643340/.minikube/files for local assets ...
	I0203 22:38:13.916538 1004591 filesync.go:149] local asset: /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/6500652.pem -> 6500652.pem in /etc/ssl/certs
	I0203 22:38:13.916645 1004591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 22:38:13.924601 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/6500652.pem --> /etc/ssl/certs/6500652.pem (1708 bytes)
	I0203 22:38:13.944178 1004591 start.go:303] post-start completed in 201.758507ms
	I0203 22:38:13.944627 1004591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-770968
	I0203 22:38:14.037008 1004591 profile.go:148] Saving config to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/config.json ...
	I0203 22:38:14.037300 1004591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 22:38:14.037360 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:14.130516 1004591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa Username:docker}
	I0203 22:38:14.226028 1004591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0203 22:38:14.230580 1004591 start.go:128] duration metric: createHost completed in 10.790100425s
	I0203 22:38:14.230608 1004591 start.go:83] releasing machines lock for "kindnet-770968", held for 10.790366482s
	I0203 22:38:14.230679 1004591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-770968
	I0203 22:38:14.342079 1004591 ssh_runner.go:195] Run: cat /version.json
	I0203 22:38:14.342157 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:14.342169 1004591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 22:38:14.342263 1004591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-770968
	I0203 22:38:14.451187 1004591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa Username:docker}
	I0203 22:38:14.453252 1004591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33336 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/kindnet-770968/id_rsa Username:docker}
	I0203 22:38:14.548593 1004591 ssh_runner.go:195] Run: systemctl --version
	I0203 22:38:14.582947 1004591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 22:38:14.588325 1004591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0203 22:38:14.609903 1004591 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0203 22:38:14.610060 1004591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 22:38:14.617084 1004591 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0203 22:38:14.630485 1004591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 22:38:14.655282 1004591 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0203 22:38:14.655320 1004591 start.go:483] detecting cgroup driver to use...
	I0203 22:38:14.655357 1004591 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 22:38:14.655499 1004591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 22:38:14.674360 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0203 22:38:14.683691 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 22:38:14.693014 1004591 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 22:38:14.693086 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 22:38:14.702223 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 22:38:14.710604 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 22:38:14.719403 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 22:38:14.727956 1004591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 22:38:14.740081 1004591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 22:38:14.752368 1004591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 22:38:14.762909 1004591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 22:38:14.772602 1004591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 22:38:14.959370 1004591 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 22:38:15.084053 1004591 start.go:483] detecting cgroup driver to use...
	I0203 22:38:15.084115 1004591 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 22:38:15.084169 1004591 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 22:38:15.101931 1004591 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0203 22:38:15.102002 1004591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 22:38:15.126009 1004591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 22:38:15.152560 1004591 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 22:38:15.285995 1004591 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 22:38:15.401814 1004591 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 22:38:15.401852 1004591 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0203 22:38:15.426705 1004591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 22:38:15.550281 1004591 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 22:38:15.845337 1004591 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 22:38:15.952066 1004591 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 22:38:16.094060 1004591 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 22:38:16.217618 1004591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 22:38:16.340684 1004591 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 22:38:16.353656 1004591 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 22:38:16.353726 1004591 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 22:38:16.357274 1004591 start.go:551] Will wait 60s for crictl version
	I0203 22:38:16.357347 1004591 ssh_runner.go:195] Run: which crictl
	I0203 22:38:16.360768 1004591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 22:38:16.465833 1004591 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0203 22:38:16.465891 1004591 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 22:38:16.498201 1004591 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 22:38:16.059533  993024 addons.go:492] enable addons completed in 1.907506022s: enabled=[default-storageclass storage-provisioner]
	I0203 22:38:16.058234  993024 node_ready.go:35] waiting up to 15m0s for node "auto-770968" to be "Ready" ...
	I0203 22:38:16.062624  993024 node_ready.go:49] node "auto-770968" has status "Ready":"True"
	I0203 22:38:16.062647  993024 node_ready.go:38] duration metric: took 3.077522ms waiting for node "auto-770968" to be "Ready" ...
	I0203 22:38:16.062657  993024 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:16.069734  993024 pod_ready.go:78] waiting up to 15m0s for pod "coredns-787d4945fb-mgggf" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.534971 1004591 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0203 22:38:16.535057 1004591 cli_runner.go:164] Run: docker network inspect kindnet-770968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0203 22:38:16.620516 1004591 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0203 22:38:16.624113 1004591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 22:38:16.635603 1004591 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 22:38:16.635699 1004591 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 22:38:16.666395 1004591 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0203 22:38:16.666421 1004591 docker.go:560] Images already preloaded, skipping extraction
	I0203 22:38:16.666595 1004591 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 22:38:16.700040 1004591 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0203 22:38:16.700060 1004591 cache_images.go:84] Images are preloaded, skipping loading
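cache_images.go decides it can skip loading because every image the cluster needs already appears in `docker images`. A sketch of that membership check (hypothetical helper and required list; uses the same --format as the commands above):

package preload

import (
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether every required repo:tag already exists in
// the daemon, as the "Images already preloaded" decision above implies.
func imagesPreloaded(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}
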
	I0203 22:38:16.700107 1004591 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 22:38:16.803912 1004591 cni.go:84] Creating CNI manager for "kindnet"
	I0203 22:38:16.803946 1004591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0203 22:38:16.803968 1004591 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-770968 NodeName:kindnet-770968 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0203 22:38:16.804188 1004591 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kindnet-770968"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 22:38:16.804301 1004591 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kindnet-770968 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:kindnet-770968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0203 22:38:16.804367 1004591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0203 22:38:16.816024 1004591 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 22:38:16.816088 1004591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 22:38:16.824791 1004591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (446 bytes)
	I0203 22:38:16.842350 1004591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 22:38:16.860002 1004591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes)
	I0203 22:38:16.875820 1004591 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0203 22:38:16.879198 1004591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
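The grep/echo one-liners here rewrite /etc/hosts idempotently: drop any stale record for the name, then append the current mapping. A Go sketch of the same rewrite (hypothetical helper; writes in place and assumes root, whereas the shell version above stages through /tmp/h.$$ and copies):

package hosts

import (
	"os"
	"strings"
)

// ensureRecord drops any existing line ending in "\t<name>" and appends the
// current mapping, matching the grep -v / echo pipeline above.
func ensureRecord(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}
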
	I0203 22:38:16.890903 1004591 certs.go:56] Setting up /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968 for IP: 192.168.67.2
	I0203 22:38:16.890935 1004591 certs.go:186] acquiring lock for shared ca certs: {Name:mke70fce29a277706b809a1e09202f97eb3de8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:16.891085 1004591 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15770-643340/.minikube/ca.key
	I0203 22:38:16.891122 1004591 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15770-643340/.minikube/proxy-client-ca.key
	I0203 22:38:16.891196 1004591 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.key
	I0203 22:38:16.891216 1004591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt with IP's: []
	I0203 22:38:17.095005 1004591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt ...
	I0203 22:38:17.095035 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: {Name:mk3c77b8eff68bc1ccdd46c28205301dc7974378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:17.095187 1004591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.key ...
	I0203 22:38:17.095198 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.key: {Name:mkfefcf2627b205e82a91c5f0410ce518e0242b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:17.095266 1004591 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.key.c7fa3a9e
	I0203 22:38:17.095280 1004591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0203 22:38:17.270396 1004591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.crt.c7fa3a9e ...
	I0203 22:38:17.270429 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.crt.c7fa3a9e: {Name:mk22d64d141d8cb1755d91d94ffbd1df94ace2be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:17.270575 1004591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.key.c7fa3a9e ...
	I0203 22:38:17.270586 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.key.c7fa3a9e: {Name:mkd0dd7be4874ef70999fdcfaf854f142843fbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:17.270653 1004591 certs.go:333] copying /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.crt
	I0203 22:38:17.270707 1004591 certs.go:337] copying /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.key
	I0203 22:38:17.270750 1004591 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.key
	I0203 22:38:17.270763 1004591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.crt with IP's: []
	I0203 22:38:17.686011 1004591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.crt ...
	I0203 22:38:17.686042 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.crt: {Name:mkc6d2883d85d953c47cd456795fcd75eaaf3eda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:17.686234 1004591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.key ...
	I0203 22:38:17.686245 1004591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.key: {Name:mk54b39abd9e8bdaf7fb30413a204e94320d7218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 22:38:17.686401 1004591 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/650065.pem (1338 bytes)
	W0203 22:38:17.686438 1004591 certs.go:397] ignoring /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/650065_empty.pem, impossibly tiny 0 bytes
	I0203 22:38:17.686449 1004591 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca-key.pem (1679 bytes)
	I0203 22:38:17.686472 1004591 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/ca.pem (1082 bytes)
	I0203 22:38:17.686494 1004591 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/cert.pem (1123 bytes)
	I0203 22:38:17.686518 1004591 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/certs/home/jenkins/minikube-integration/15770-643340/.minikube/certs/key.pem (1679 bytes)
	I0203 22:38:17.686550 1004591 certs.go:401] found cert: /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/6500652.pem (1708 bytes)
	I0203 22:38:17.687197 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0203 22:38:17.707800 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 22:38:17.726274 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 22:38:17.745415 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 22:38:17.764416 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 22:38:17.783659 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0203 22:38:17.803874 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 22:38:17.823078 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0203 22:38:17.841596 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/certs/650065.pem --> /usr/share/ca-certificates/650065.pem (1338 bytes)
	I0203 22:38:17.864354 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/ssl/certs/6500652.pem --> /usr/share/ca-certificates/6500652.pem (1708 bytes)
	I0203 22:38:17.887733 1004591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 22:38:17.909826 1004591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 22:38:17.924689 1004591 ssh_runner.go:195] Run: openssl version
	I0203 22:38:17.929986 1004591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6500652.pem && ln -fs /usr/share/ca-certificates/6500652.pem /etc/ssl/certs/6500652.pem"
	I0203 22:38:17.938104 1004591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6500652.pem
	I0203 22:38:17.941617 1004591 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb  3 22:12 /usr/share/ca-certificates/6500652.pem
	I0203 22:38:17.941673 1004591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6500652.pem
	I0203 22:38:17.946718 1004591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6500652.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 22:38:17.954962 1004591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 22:38:17.962787 1004591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 22:38:17.970334 1004591 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb  3 22:08 /usr/share/ca-certificates/minikubeCA.pem
	I0203 22:38:17.970391 1004591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 22:38:17.975236 1004591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 22:38:17.982850 1004591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/650065.pem && ln -fs /usr/share/ca-certificates/650065.pem /etc/ssl/certs/650065.pem"
	I0203 22:38:17.991072 1004591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/650065.pem
	I0203 22:38:17.994888 1004591 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb  3 22:12 /usr/share/ca-certificates/650065.pem
	I0203 22:38:17.994957 1004591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/650065.pem
	I0203 22:38:18.000693 1004591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/650065.pem /etc/ssl/certs/51391683.0"
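The `openssl x509 -hash` calls above compute the subject-name hash OpenSSL uses to look certificates up in /etc/ssl/certs, and each cert is then linked as <hash>.0 (for example 6500652.pem -> 3ec20f2e.0). A sketch of that trust step (hypothetical helper; assumes root on the guest):

package certs

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert links the PEM into /etc/ssl/certs under its OpenSSL subject
// hash, which is how the ln -fs commands above make it discoverable.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}
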
	I0203 22:38:18.009336 1004591 kubeadm.go:401] StartCluster: {Name:kindnet-770968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kindnet-770968 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 22:38:18.009495 1004591 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 22:38:18.033692 1004591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 22:38:18.042084 1004591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 22:38:18.050389 1004591 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0203 22:38:18.050464 1004591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 22:38:18.059003 1004591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 22:38:18.059054 1004591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 22:38:18.115346 1004591 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0203 22:38:18.115501 1004591 kubeadm.go:322] [preflight] Running pre-flight checks
	I0203 22:38:18.150202 1004591 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0203 22:38:18.150316 1004591 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1027-gcp
	I0203 22:38:18.150386 1004591 kubeadm.go:322] OS: Linux
	I0203 22:38:18.150470 1004591 kubeadm.go:322] CGROUPS_CPU: enabled
	I0203 22:38:18.150553 1004591 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0203 22:38:18.150635 1004591 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0203 22:38:18.150721 1004591 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0203 22:38:18.150791 1004591 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0203 22:38:18.150854 1004591 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0203 22:38:18.150916 1004591 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0203 22:38:18.150987 1004591 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0203 22:38:18.151061 1004591 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0203 22:38:18.220873 1004591 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 22:38:18.221009 1004591 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 22:38:18.221120 1004591 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 22:38:18.367754 1004591 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 22:38:16.490944  979588 pod_ready.go:92] pod "kube-apiserver-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:16.490967  979588 pod_ready.go:81] duration metric: took 399.350207ms waiting for pod "kube-apiserver-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.490977  979588 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.895568  979588 pod_ready.go:92] pod "kube-controller-manager-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:16.895592  979588 pod_ready.go:81] duration metric: took 404.606919ms waiting for pod "kube-controller-manager-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:16.895606  979588 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6q8r8" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:17.291424  979588 pod_ready.go:92] pod "kube-proxy-6q8r8" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:17.291453  979588 pod_ready.go:81] duration metric: took 395.838528ms waiting for pod "kube-proxy-6q8r8" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:17.291467  979588 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:17.690304  979588 pod_ready.go:92] pod "kube-scheduler-pause-868256" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:17.690326  979588 pod_ready.go:81] duration metric: took 398.850097ms waiting for pod "kube-scheduler-pause-868256" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:17.690333  979588 pod_ready.go:38] duration metric: took 2.59490922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:17.690353  979588 api_server.go:51] waiting for apiserver process to appear ...
	I0203 22:38:17.690389  979588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 22:38:17.700729  979588 api_server.go:71] duration metric: took 2.779046006s to wait for apiserver process to appear ...
	I0203 22:38:17.700770  979588 api_server.go:87] waiting for apiserver healthz status ...
	I0203 22:38:17.700785  979588 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0203 22:38:17.705049  979588 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0203 22:38:17.706096  979588 api_server.go:140] control plane version: v1.26.1
	I0203 22:38:17.706119  979588 api_server.go:130] duration metric: took 5.342484ms to wait for apiserver health ...
	I0203 22:38:17.706130  979588 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 22:38:17.893912  979588 system_pods.go:59] 7 kube-system pods found
	I0203 22:38:17.893946  979588 system_pods.go:61] "coredns-787d4945fb-dd5vv" [aed13814-af10-4c1c-9548-20630079cd3c] Running
	I0203 22:38:17.893953  979588 system_pods.go:61] "etcd-pause-868256" [595c2af1-3166-4d59-969d-bc282f646ed5] Running
	I0203 22:38:17.893959  979588 system_pods.go:61] "kube-apiserver-pause-868256" [2a4d2f7d-025b-47a7-99c5-70079183e798] Running
	I0203 22:38:17.893966  979588 system_pods.go:61] "kube-controller-manager-pause-868256" [8f8a526f-dea5-4e08-8258-dd4e4654ae32] Running
	I0203 22:38:17.893972  979588 system_pods.go:61] "kube-proxy-6q8r8" [a9c6e5f1-fd98-4bc1-aae7-b0485f877616] Running
	I0203 22:38:17.893978  979588 system_pods.go:61] "kube-scheduler-pause-868256" [72bf6a79-cdaf-46bb-93fc-f8d402880694] Running
	I0203 22:38:17.893984  979588 system_pods.go:61] "storage-provisioner" [48da2fca-7198-449d-bebd-84e7ce3d61e0] Running
	I0203 22:38:17.893991  979588 system_pods.go:74] duration metric: took 187.854082ms to wait for pod list to return data ...
	I0203 22:38:17.894002  979588 default_sa.go:34] waiting for default service account to be created ...
	I0203 22:38:18.090160  979588 default_sa.go:45] found service account: "default"
	I0203 22:38:18.090187  979588 default_sa.go:55] duration metric: took 196.177872ms for default service account to be created ...
	I0203 22:38:18.090198  979588 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 22:38:18.293177  979588 system_pods.go:86] 7 kube-system pods found
	I0203 22:38:18.293208  979588 system_pods.go:89] "coredns-787d4945fb-dd5vv" [aed13814-af10-4c1c-9548-20630079cd3c] Running
	I0203 22:38:18.293216  979588 system_pods.go:89] "etcd-pause-868256" [595c2af1-3166-4d59-969d-bc282f646ed5] Running
	I0203 22:38:18.293224  979588 system_pods.go:89] "kube-apiserver-pause-868256" [2a4d2f7d-025b-47a7-99c5-70079183e798] Running
	I0203 22:38:18.293232  979588 system_pods.go:89] "kube-controller-manager-pause-868256" [8f8a526f-dea5-4e08-8258-dd4e4654ae32] Running
	I0203 22:38:18.293238  979588 system_pods.go:89] "kube-proxy-6q8r8" [a9c6e5f1-fd98-4bc1-aae7-b0485f877616] Running
	I0203 22:38:18.293244  979588 system_pods.go:89] "kube-scheduler-pause-868256" [72bf6a79-cdaf-46bb-93fc-f8d402880694] Running
	I0203 22:38:18.293251  979588 system_pods.go:89] "storage-provisioner" [48da2fca-7198-449d-bebd-84e7ce3d61e0] Running
	I0203 22:38:18.293262  979588 system_pods.go:126] duration metric: took 203.057207ms to wait for k8s-apps to be running ...
	I0203 22:38:18.293276  979588 system_svc.go:44] waiting for kubelet service to be running ...
	I0203 22:38:18.293331  979588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 22:38:18.305047  979588 system_svc.go:56] duration metric: took 11.756989ms (WaitForService) to wait for kubelet.
	I0203 22:38:18.305082  979588 kubeadm.go:578] duration metric: took 3.383408222s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0203 22:38:18.305108  979588 node_conditions.go:102] verifying NodePressure condition ...
	I0203 22:38:18.491041  979588 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0203 22:38:18.491067  979588 node_conditions.go:123] node cpu capacity is 8
	I0203 22:38:18.491078  979588 node_conditions.go:105] duration metric: took 185.954067ms to run NodePressure ...
	I0203 22:38:18.491092  979588 start.go:228] waiting for startup goroutines ...
	I0203 22:38:18.491102  979588 start.go:233] waiting for cluster config update ...
	I0203 22:38:18.491115  979588 start.go:240] writing updated cluster config ...
	I0203 22:38:18.491444  979588 ssh_runner.go:195] Run: rm -f paused
	I0203 22:38:18.544355  979588 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0203 22:38:18.546936  979588 out.go:177] * Done! kubectl is now configured to use "pause-868256" cluster and "default" namespace by default
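	
	The healthz wait logged above (api_server.go:252-278) is a plain HTTPS poll of the apiserver's /healthz endpoint until it answers 200 with body "ok". A minimal Go sketch of that probe; for brevity it uses an insecure TLS client, where the real check trusts the cluster CA:
	
		package main
		
		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)
		
		// pollHealthz polls the apiserver's /healthz endpoint until it answers
		// "ok" or the deadline expires. InsecureSkipVerify is for illustration
		// only; a real client would verify against the cluster CA instead.
		func pollHealthz(endpoint string, timeout time.Duration) error {
			client := &http.Client{
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
				Timeout:   2 * time.Second,
			}
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				resp, err := client.Get(endpoint + "/healthz")
				if err == nil {
					body, _ := io.ReadAll(resp.Body)
					resp.Body.Close()
					if resp.StatusCode == http.StatusOK && string(body) == "ok" {
						return nil
					}
				}
				time.Sleep(500 * time.Millisecond)
			}
			return fmt.Errorf("apiserver not healthy after %s", timeout)
		}
		
		func main() {
			if err := pollHealthz("https://192.168.85.2:8443", 30*time.Second); err != nil {
				fmt.Println(err)
			}
		}
	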
	I0203 22:38:18.079891  993024 pod_ready.go:92] pod "coredns-787d4945fb-mgggf" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:18.079917  993024 pod_ready.go:81] duration metric: took 2.010139572s waiting for pod "coredns-787d4945fb-mgggf" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.079929  993024 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.085179  993024 pod_ready.go:92] pod "etcd-auto-770968" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:18.085203  993024 pod_ready.go:81] duration metric: took 5.265511ms waiting for pod "etcd-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.085221  993024 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.090508  993024 pod_ready.go:92] pod "kube-apiserver-auto-770968" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:18.090537  993024 pod_ready.go:81] duration metric: took 5.307943ms waiting for pod "kube-apiserver-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.090549  993024 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.095978  993024 pod_ready.go:92] pod "kube-controller-manager-auto-770968" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:18.096004  993024 pod_ready.go:81] duration metric: took 5.445961ms waiting for pod "kube-controller-manager-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.096017  993024 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-9xzdr" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.101129  993024 pod_ready.go:92] pod "kube-proxy-9xzdr" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:18.101150  993024 pod_ready.go:81] duration metric: took 5.12452ms waiting for pod "kube-proxy-9xzdr" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.101168  993024 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.477216  993024 pod_ready.go:92] pod "kube-scheduler-auto-770968" in "kube-system" namespace has status "Ready":"True"
	I0203 22:38:18.477242  993024 pod_ready.go:81] duration metric: took 376.062648ms waiting for pod "kube-scheduler-auto-770968" in "kube-system" namespace to be "Ready" ...
	I0203 22:38:18.477255  993024 pod_ready.go:38] duration metric: took 2.4145873s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 22:38:18.477278  993024 api_server.go:51] waiting for apiserver process to appear ...
	I0203 22:38:18.477325  993024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 22:38:18.487665  993024 api_server.go:71] duration metric: took 3.744954873s to wait for apiserver process to appear ...
	I0203 22:38:18.487694  993024 api_server.go:87] waiting for apiserver healthz status ...
	I0203 22:38:18.487710  993024 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0203 22:38:18.492370  993024 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0203 22:38:18.493311  993024 api_server.go:140] control plane version: v1.26.1
	I0203 22:38:18.493332  993024 api_server.go:130] duration metric: took 5.632737ms to wait for apiserver health ...
	I0203 22:38:18.493342  993024 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 22:38:18.681678  993024 system_pods.go:59] 7 kube-system pods found
	I0203 22:38:18.681713  993024 system_pods.go:61] "coredns-787d4945fb-mgggf" [5ed178b8-b7da-4220-8918-3ae813bad2dc] Running
	I0203 22:38:18.681720  993024 system_pods.go:61] "etcd-auto-770968" [7c38d51e-b3d3-44f2-a341-cfa8ea7214a4] Running
	I0203 22:38:18.681727  993024 system_pods.go:61] "kube-apiserver-auto-770968" [ef37a8d2-175c-4870-af06-cca9da2213d3] Running
	I0203 22:38:18.681735  993024 system_pods.go:61] "kube-controller-manager-auto-770968" [edf88d0e-aab2-4513-b404-12cbb7f6b3fe] Running
	I0203 22:38:18.681742  993024 system_pods.go:61] "kube-proxy-9xzdr" [0422a10c-b5b6-40f8-9414-890e5edb3789] Running
	I0203 22:38:18.681749  993024 system_pods.go:61] "kube-scheduler-auto-770968" [57dcc5c1-699f-42bb-a2b1-913e211a730b] Running
	I0203 22:38:18.681753  993024 system_pods.go:61] "storage-provisioner" [db03fdab-2123-43e2-a481-7f60e9b8abd9] Running
	I0203 22:38:18.681758  993024 system_pods.go:74] duration metric: took 188.41201ms to wait for pod list to return data ...
	I0203 22:38:18.681772  993024 default_sa.go:34] waiting for default service account to be created ...
	I0203 22:38:18.879833  993024 default_sa.go:45] found service account: "default"
	I0203 22:38:18.879867  993024 default_sa.go:55] duration metric: took 198.086813ms for default service account to be created ...
	I0203 22:38:18.879878  993024 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 22:38:19.079562  993024 system_pods.go:86] 7 kube-system pods found
	I0203 22:38:19.079597  993024 system_pods.go:89] "coredns-787d4945fb-mgggf" [5ed178b8-b7da-4220-8918-3ae813bad2dc] Running
	I0203 22:38:19.079605  993024 system_pods.go:89] "etcd-auto-770968" [7c38d51e-b3d3-44f2-a341-cfa8ea7214a4] Running
	I0203 22:38:19.079611  993024 system_pods.go:89] "kube-apiserver-auto-770968" [ef37a8d2-175c-4870-af06-cca9da2213d3] Running
	I0203 22:38:19.079620  993024 system_pods.go:89] "kube-controller-manager-auto-770968" [edf88d0e-aab2-4513-b404-12cbb7f6b3fe] Running
	I0203 22:38:19.079628  993024 system_pods.go:89] "kube-proxy-9xzdr" [0422a10c-b5b6-40f8-9414-890e5edb3789] Running
	I0203 22:38:19.079634  993024 system_pods.go:89] "kube-scheduler-auto-770968" [57dcc5c1-699f-42bb-a2b1-913e211a730b] Running
	I0203 22:38:19.079640  993024 system_pods.go:89] "storage-provisioner" [db03fdab-2123-43e2-a481-7f60e9b8abd9] Running
	I0203 22:38:19.079648  993024 system_pods.go:126] duration metric: took 199.764172ms to wait for k8s-apps to be running ...
	I0203 22:38:19.079659  993024 system_svc.go:44] waiting for kubelet service to be running ...
	I0203 22:38:19.079703  993024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 22:38:19.090019  993024 system_svc.go:56] duration metric: took 10.348879ms (WaitForService) to wait for kubelet.
	I0203 22:38:19.090048  993024 kubeadm.go:578] duration metric: took 4.347346913s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0203 22:38:19.090074  993024 node_conditions.go:102] verifying NodePressure condition ...
	I0203 22:38:19.277411  993024 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0203 22:38:19.277444  993024 node_conditions.go:123] node cpu capacity is 8
	I0203 22:38:19.277460  993024 node_conditions.go:105] duration metric: took 187.381288ms to run NodePressure ...
	I0203 22:38:19.277474  993024 start.go:228] waiting for startup goroutines ...
	I0203 22:38:19.277482  993024 start.go:233] waiting for cluster config update ...
	I0203 22:38:19.277495  993024 start.go:240] writing updated cluster config ...
	I0203 22:38:19.277799  993024 ssh_runner.go:195] Run: rm -f paused
	I0203 22:38:19.343358  993024 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0203 22:38:19.346801  993024 out.go:177] * Done! kubectl is now configured to use "auto-770968" cluster and "default" namespace by default
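	
	The pod_ready.go waits in this stream poll each pod until its PodReady condition reports True. A rough client-go equivalent, assuming a kubeconfig at the default home path (a sketch, not minikube's actual implementation):
	
		package main
		
		import (
			"context"
			"fmt"
			"time"
		
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
		
		// waitPodReady polls a pod until its PodReady condition is True.
		func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
				if err == nil {
					for _, c := range pod.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							return nil
						}
					}
				}
				time.Sleep(400 * time.Millisecond)
			}
			return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
		}
		
		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			cs := kubernetes.NewForConfigOrDie(cfg)
			fmt.Println(waitPodReady(cs, "kube-system", "kube-scheduler-auto-770968", 6*time.Minute))
		}
	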
	I0203 22:38:18.371085 1004591 out.go:204]   - Generating certificates and keys ...
	I0203 22:38:18.371220 1004591 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0203 22:38:18.371316 1004591 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0203 22:38:18.557117 1004591 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 22:38:18.757593 1004591 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0203 22:38:18.887155 1004591 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0203 22:38:19.057533 1004591 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0203 22:38:19.199747 1004591 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0203 22:38:19.199898 1004591 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kindnet-770968 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0203 22:38:19.395866 1004591 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0203 22:38:19.396071 1004591 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kindnet-770968 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0203 22:38:19.620143 1004591 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 22:38:19.712968 1004591 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 22:38:19.865949 1004591 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0203 22:38:19.866143 1004591 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 22:38:20.004411 1004591 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 22:38:20.377909 1004591 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 22:38:20.476205 1004591 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 22:38:20.639722 1004591 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 22:38:20.659246 1004591 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 22:38:20.660312 1004591 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 22:38:20.660403 1004591 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0203 22:38:20.767160 1004591 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
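	
	The kubeadm.go:322 lines interleaved above are minikube relaying stdout from a `kubeadm init` run inside the node. A much-simplified local sketch of that relay; the config path and preflight flag here are assumptions, and the real runner executes inside the node container rather than locally:
	
		package main
		
		import (
			"bufio"
			"fmt"
			"os/exec"
		)
		
		func main() {
			// Illustrative only: minikube runs kubeadm inside the node, not locally.
			cmd := exec.Command("sudo", "kubeadm", "init",
				"--config", "/var/tmp/minikube/kubeadm.yaml",
				"--ignore-preflight-errors=SystemVerification")
			out, err := cmd.StdoutPipe()
			if err != nil {
				panic(err)
			}
			if err := cmd.Start(); err != nil {
				panic(err)
			}
			sc := bufio.NewScanner(out)
			for sc.Scan() {
				// Each stdout line becomes one kubeadm.go:322 log entry above.
				fmt.Println("kubeadm:", sc.Text())
			}
			cmd.Wait()
		}
	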
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-03 22:36:24 UTC, end at Fri 2023-02-03 22:38:23 UTC. --
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.633049977Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.633076265Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.633081922Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.633249673Z" level=info msg="Loading containers: start."
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.790653955Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.847019416Z" level=info msg="Loading containers: done."
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.880068408Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.880170354Z" level=info msg="Daemon has completed initialization"
	Feb 03 22:37:16 pause-868256 systemd[1]: Started Docker Application Container Engine.
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.900326467Z" level=info msg="API listen on [::]:2376"
	Feb 03 22:37:16 pause-868256 dockerd[4877]: time="2023-02-03T22:37:16.906076177Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 03 22:37:18 pause-868256 dockerd[4877]: time="2023-02-03T22:37:18.371758193Z" level=info msg="ignoring event" container=d0ec4fe6e67fa4f395d4d150b214490e9f1fbf20d8203653d6005c635ddc8628 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:39 pause-868256 dockerd[4877]: time="2023-02-03T22:37:39.545368649Z" level=info msg="ignoring event" container=b2e7a9f54a0419231060db56c69b54be5167fcfaddd3d1c6fa9c1b05363364fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.836014856Z" level=info msg="ignoring event" container=ec9da7ff44bd974a6c7738a0784f426043dcd53abacf1eb7797361c3d84a0b5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.837654540Z" level=info msg="ignoring event" container=22bb57467f2447dbae6d332677cf48e7d192f8ea7484eb6998ae4116c62183ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.841188556Z" level=info msg="ignoring event" container=65f599eb0eebff7f0068738e41c8ce5ac1384d19182283b24f7b5d74df1778a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.847483927Z" level=info msg="ignoring event" container=36c6f2ce6f7acad72c43e1117d4df8a4d65a22f3a8dc6d6b96a5728233e06ca4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.850472509Z" level=info msg="ignoring event" container=c638c5348fb7a44ef74083b0194d8a784bf238b23ec719ac4da429cb2233299b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.850515884Z" level=info msg="ignoring event" container=0364f8ab712b82038f0d44ed3b9a487c0a41355a9bf2c3871bc59cbe494bcd13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.963104917Z" level=info msg="ignoring event" container=dedaef110fcee9bafa404feb548f2906376d2c78796b863fcaa3eeb9dfae6f7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.963155127Z" level=info msg="ignoring event" container=e5363a998cd8779f1dfb21bfc557b173cfb95790a25259b71f05d6751e64d1e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.964922705Z" level=info msg="ignoring event" container=1a8a12bf42f57608542b94122ec09cf259f8a2147761e771499f9b85f78f6958 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:49 pause-868256 dockerd[4877]: time="2023-02-03T22:37:49.969399825Z" level=info msg="ignoring event" container=8f69d29f79237d9d49213cba679accdd983fcb185dbfc1c5307c4dc3bc005d57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:37:50 pause-868256 dockerd[4877]: time="2023-02-03T22:37:50.572502115Z" level=error msg="4a01e9080876caf1021d4aa3b4ba2a876f8bd761cd3605b526e58885ac293bcb cleanup: failed to delete container from containerd: no such container"
	Feb 03 22:37:54 pause-868256 dockerd[4877]: time="2023-02-03T22:37:54.773428186Z" level=info msg="ignoring event" container=900b5dd1be8ed33d38d7f7b4d0c08d1876dbee0c336ae09b5c912646dda06e91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	a511701d78f48       6e38f40d628db       7 seconds ago        Running             storage-provisioner       0                   797db591ad316
	47ced30f2dd57       5185b96f0becf       21 seconds ago       Running             coredns                   2                   d9f07975099b4
	eed4fb65b1355       46a6bb3c77ce0       22 seconds ago       Running             kube-proxy                3                   864eee88f4f2d
	6a83f36d42569       655493523f607       27 seconds ago       Running             kube-scheduler            3                   d4b8c3d690241
	318fc22205625       e9c08e11b07f6       27 seconds ago       Running             kube-controller-manager   3                   0ab7110849dd3
	35b3d2d96970f       deb04688c4a35       27 seconds ago       Running             kube-apiserver            3                   02be3ab2f0196
	a18ea735ec1fa       fce326961ae2d       27 seconds ago       Running             etcd                      3                   4566f9c32f9b6
	4a01e9080876c       deb04688c4a35       33 seconds ago       Created             kube-apiserver            2                   22bb57467f244
	dedaef110fcee       fce326961ae2d       45 seconds ago       Exited              etcd                      2                   36c6f2ce6f7ac
	65f599eb0eebf       e9c08e11b07f6       47 seconds ago       Exited              kube-controller-manager   2                   c638c5348fb7a
	1a8a12bf42f57       655493523f607       47 seconds ago       Exited              kube-scheduler            2                   ec9da7ff44bd9
	0364f8ab712b8       46a6bb3c77ce0       52 seconds ago       Exited              kube-proxy                2                   8f69d29f79237
	900b5dd1be8ed       5185b96f0becf       About a minute ago   Exited              coredns                   1                   e5363a998cd87
	
	* 
	* ==> coredns [47ced30f2dd5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:51220 - 31635 "HINFO IN 8398048868105058340.2766040748343229918. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01080628s
	
	* 
	* ==> coredns [900b5dd1be8e] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:35355 - 33426 "HINFO IN 5964515539779486170.4574324880819497146. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.085467132s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-868256
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-868256
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b839c677c13f941c936975b72b386dd12a345761
	                    minikube.k8s.io/name=pause-868256
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_03T22_36_49_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Feb 2023 22:36:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-868256
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Feb 2023 22:38:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Feb 2023 22:38:00 +0000   Fri, 03 Feb 2023 22:36:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Feb 2023 22:38:00 +0000   Fri, 03 Feb 2023 22:36:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Feb 2023 22:38:00 +0000   Fri, 03 Feb 2023 22:36:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Feb 2023 22:38:00 +0000   Fri, 03 Feb 2023 22:36:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-868256
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4c0b538bb934883b9b745615631a0cd
	  System UUID:                d96f8b80-73b2-4930-815b-fb582dc6c346
	  Boot ID:                    df076b79-1073-4433-b2e0-bb3b5cc417dd
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-dd5vv                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     81s
	  kube-system                 etcd-pause-868256                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         93s
	  kube-system                 kube-apiserver-pause-868256             250m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-pause-868256    200m (2%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-6q8r8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-pause-868256             100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 80s                  kube-proxy       
	  Normal  Starting                 21s                  kube-proxy       
	  Normal  NodeHasSufficientPID     106s (x5 over 106s)  kubelet          Node pause-868256 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    106s (x5 over 106s)  kubelet          Node pause-868256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  106s (x5 over 106s)  kubelet          Node pause-868256 status is now: NodeHasSufficientMemory
	  Normal  Starting                 94s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  94s                  kubelet          Node pause-868256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s                  kubelet          Node pause-868256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s                  kubelet          Node pause-868256 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             94s                  kubelet          Node pause-868256 status is now: NodeNotReady
	  Normal  NodeReady                93s                  kubelet          Node pause-868256 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  93s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           81s                  node-controller  Node pause-868256 event: Registered Node pause-868256 in Controller
	  Normal  Starting                 28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)    kubelet          Node pause-868256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)    kubelet          Node pause-868256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)    kubelet          Node pause-868256 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                  node-controller  Node pause-868256 event: Registered Node pause-868256 in Controller
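	
	The request percentages in the tables above come from dividing requests by node allocatable capacity; 750m of CPU against 8 full cores rounds to 9%. A quick worked check using this node's figures:
	
		package main
		
		import "fmt"
		
		func main() {
			// Node allocatable: 8 CPUs = 8000 millicores, 32871748Ki memory.
			cpuRequestMilli := 750.0     // 250m apiserver + 200m controller-manager + 100m each for coredns, etcd, scheduler
			cpuAllocMilli := 8000.0
			memRequestKi := 170.0 * 1024 // 70Mi coredns + 100Mi etcd = 170Mi
			memAllocKi := 32871748.0
		
			fmt.Printf("cpu:    %.0f%%\n", 100*cpuRequestMilli/cpuAllocMilli) // ~9%
			fmt.Printf("memory: %.1f%%\n", 100*memRequestKi/memAllocKi)       // ~0.5%; kubectl truncates this to 0%
		}
	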
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e 28 34 f1 31 b2 08 06
	[Feb 3 22:30] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e fa bc 13 11 9f 08 06
	[Feb 3 22:31] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 a4 4b 6c 89 08 06
	[Feb 3 22:33] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 19 20 d9 94 9b 08 06
	[  +0.321290] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 19 20 d9 94 9b 08 06
	[Feb 3 22:35] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 02 f1 6b d2 1d a5 08 06
	[  +0.597047] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a d1 3e c4 71 e1 08 06
	[Feb 3 22:36] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a 80 51 bb 28 22 08 06
	[Feb 3 22:37] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 70 85 71 50 cd 08 06
	[  +0.447318] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 70 85 71 50 cd 08 06
	[ +22.892103] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 17 95 87 4a 17 08 06
	[Feb 3 22:38] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 39 60 ce 8f 52 08 06
	[ +14.790896] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 02 fc e0 1f 1f 1f 08 06
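	
	The "martian source" entries are the kernel flagging packets whose source address should be impossible on the receiving interface, which is common during container network churn; they appear only while the log_martians sysctl is enabled. A small Go check of that setting, assuming the conventional /proc path:
	
		package main
		
		import (
			"fmt"
			"os"
			"strings"
		)
		
		func main() {
			// 1 means the kernel logs martian packets (the dmesg lines above).
			b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
			if err != nil {
				fmt.Println("could not read sysctl:", err)
				return
			}
			fmt.Println("net.ipv4.conf.all.log_martians =", strings.TrimSpace(string(b)))
		}
	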
	
	* 
	* ==> etcd [a18ea735ec1f] <==
	* {"level":"info","ts":"2023-02-03T22:37:57.072Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-03T22:37:57.072Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-03T22:37:57.073Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-03T22:37:57.073Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-03T22:37:57.079Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-03T22:37:57.079Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-03T22:37:57.079Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-03T22:37:57.079Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-02-03T22:37:57.079Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 5"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 5"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 5"}
	{"level":"info","ts":"2023-02-03T22:37:58.960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 5"}
	{"level":"info","ts":"2023-02-03T22:37:58.962Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-03T22:37:58.962Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-868256 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-03T22:37:58.962Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-03T22:37:58.962Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-03T22:37:58.962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-03T22:37:58.963Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-03T22:37:58.964Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"warn","ts":"2023-02-03T22:38:09.008Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.61668ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-868256\" ","response":"range_response_count:1 size:5460"}
	{"level":"info","ts":"2023-02-03T22:38:09.008Z","caller":"traceutil/trace.go:171","msg":"trace[1730616229] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-868256; range_end:; response_count:1; response_revision:434; }","duration":"120.785836ms","start":"2023-02-03T22:38:08.887Z","end":"2023-02-03T22:38:09.008Z","steps":["trace[1730616229] 'range keys from in-memory index tree'  (duration: 120.416427ms)"],"step_count":1}
	
	* 
	* ==> etcd [dedaef110fce] <==
	* {"level":"info","ts":"2023-02-03T22:37:39.760Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-03T22:37:39.760Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-02-03T22:37:39.760Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-02-03T22:37:39.760Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-03T22:37:39.760Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:40.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2023-02-03T22:37:40.950Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-868256 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-03T22:37:40.950Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-03T22:37:40.951Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-03T22:37:40.951Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-03T22:37:40.952Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-03T22:37:40.952Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-03T22:37:40.953Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2023-02-03T22:37:49.753Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-03T22:37:49.753Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-868256","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"info","ts":"2023-02-03T22:37:49.760Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2023-02-03T22:37:49.763Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-02-03T22:37:49.763Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-02-03T22:37:49.763Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-868256","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	* 
	* ==> kernel <==
	*  22:38:23 up  2:20,  0 users,  load average: 6.48, 5.34, 3.29
	Linux pause-868256 5.15.0-1027-gcp #34~20.04.1-Ubuntu SMP Mon Jan 9 18:40:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [35b3d2d96970] <==
	* I0203 22:38:00.597506       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0203 22:38:00.597692       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0203 22:38:00.597702       1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
	I0203 22:38:00.598032       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 22:38:00.598191       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0203 22:38:00.636745       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0203 22:38:00.649685       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0203 22:38:00.733438       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0203 22:38:00.735823       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0203 22:38:00.736596       1 cache.go:39] Caches are synced for autoregister controller
	I0203 22:38:00.736719       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0203 22:38:00.737558       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0203 22:38:00.737642       1 shared_informer.go:280] Caches are synced for configmaps
	I0203 22:38:00.737773       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0203 22:38:00.833416       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0203 22:38:00.833451       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0203 22:38:01.349910       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0203 22:38:01.605435       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0203 22:38:02.254494       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 22:38:02.267762       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 22:38:02.303272       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 22:38:02.339559       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 22:38:02.347364       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0203 22:38:13.407639       1 controller.go:615] quota admission added evaluator for: endpoints
	I0203 22:38:13.417865       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [4a01e9080876] <==
	* 
	* 
	* ==> kube-controller-manager [318fc2220562] <==
	* I0203 22:38:13.384395       1 shared_informer.go:280] Caches are synced for TTL after finished
	I0203 22:38:13.384478       1 shared_informer.go:280] Caches are synced for job
	I0203 22:38:13.384527       1 shared_informer.go:280] Caches are synced for ReplicationController
	I0203 22:38:13.384646       1 shared_informer.go:280] Caches are synced for GC
	I0203 22:38:13.385797       1 shared_informer.go:280] Caches are synced for ReplicaSet
	I0203 22:38:13.392056       1 shared_informer.go:280] Caches are synced for HPA
	I0203 22:38:13.395272       1 shared_informer.go:280] Caches are synced for node
	I0203 22:38:13.395283       1 shared_informer.go:280] Caches are synced for disruption
	I0203 22:38:13.395357       1 range_allocator.go:167] Sending events to api server.
	I0203 22:38:13.395394       1 range_allocator.go:171] Starting range CIDR allocator
	I0203 22:38:13.395399       1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
	I0203 22:38:13.395409       1 shared_informer.go:280] Caches are synced for cidrallocator
	I0203 22:38:13.397523       1 shared_informer.go:280] Caches are synced for bootstrap_signer
	I0203 22:38:13.398975       1 shared_informer.go:280] Caches are synced for endpoint
	I0203 22:38:13.401280       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0203 22:38:13.403558       1 shared_informer.go:280] Caches are synced for TTL
	I0203 22:38:13.405731       1 shared_informer.go:280] Caches are synced for daemon sets
	I0203 22:38:13.409201       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0203 22:38:13.481937       1 shared_informer.go:280] Caches are synced for cronjob
	I0203 22:38:13.514605       1 shared_informer.go:280] Caches are synced for resource quota
	I0203 22:38:13.544905       1 shared_informer.go:280] Caches are synced for resource quota
	I0203 22:38:13.598433       1 shared_informer.go:280] Caches are synced for attach detach
	I0203 22:38:13.936046       1 shared_informer.go:280] Caches are synced for garbage collector
	I0203 22:38:13.966331       1 shared_informer.go:280] Caches are synced for garbage collector
	I0203 22:38:13.966366       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [65f599eb0eeb] <==
	* I0203 22:37:36.832518       1 serving.go:348] Generated self-signed cert in-memory
	I0203 22:37:37.438220       1 controllermanager.go:182] Version: v1.26.1
	I0203 22:37:37.438270       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 22:37:37.439973       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0203 22:37:37.440713       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0203 22:37:37.440840       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 22:37:37.441000       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-proxy [0364f8ab712b] <==
	* E0203 22:37:40.531428       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-868256": dial tcp 192.168.85.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.85.2:42262->192.168.85.2:8443: read: connection reset by peer
	E0203 22:37:41.597562       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-868256": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:43.808745       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-868256": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.159722       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-868256": dial tcp 192.168.85.2:8443: connect: connection refused
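	
	The repeated "Failed to retrieve node info" lines are kube-proxy's startup retry loop hitting an apiserver that was still restarting; the widening gaps between attempts (~1s, ~2s, ~4s) show exponential backoff. A generic sketch of that pattern, not kube-proxy's actual code:
	
		package main
		
		import (
			"fmt"
			"time"
		)
		
		// retryWithBackoff calls fn until it succeeds, doubling the wait each
		// attempt (1s, 2s, 4s, ...) up to maxWait, mirroring the spacing of
		// the kube-proxy retries above.
		func retryWithBackoff(fn func() error, maxWait time.Duration) error {
			wait := time.Second
			for {
				if err := fn(); err == nil {
					return nil
				} else {
					fmt.Println("retrying after", wait, "due to:", err)
				}
				time.Sleep(wait)
				if wait *= 2; wait > maxWait {
					wait = maxWait
				}
			}
		}
		
		func main() {
			attempts := 0
			_ = retryWithBackoff(func() error {
				attempts++
				if attempts < 4 {
					return fmt.Errorf("connect: connection refused")
				}
				return nil
			}, 30*time.Second)
			fmt.Println("succeeded after", attempts, "attempts")
		}
	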
	
	* 
	* ==> kube-proxy [eed4fb65b135] <==
	* I0203 22:38:01.749043       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0203 22:38:01.749147       1 server_others.go:109] "Detected node IP" address="192.168.85.2"
	I0203 22:38:01.749173       1 server_others.go:535] "Using iptables proxy"
	I0203 22:38:01.781102       1 server_others.go:176] "Using iptables Proxier"
	I0203 22:38:01.781161       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0203 22:38:01.781174       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0203 22:38:01.781197       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0203 22:38:01.781229       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0203 22:38:01.781627       1 server.go:655] "Version info" version="v1.26.1"
	I0203 22:38:01.781641       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 22:38:01.786662       1 config.go:317] "Starting service config controller"
	I0203 22:38:01.786698       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0203 22:38:01.786804       1 config.go:226] "Starting endpoint slice config controller"
	I0203 22:38:01.786821       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0203 22:38:01.787490       1 config.go:444] "Starting node config controller"
	I0203 22:38:01.787522       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0203 22:38:01.887788       1 shared_informer.go:280] Caches are synced for node config
	I0203 22:38:01.887827       1 shared_informer.go:280] Caches are synced for service config
	I0203 22:38:01.887861       1 shared_informer.go:280] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [1a8a12bf42f5] <==
	* W0203 22:37:48.418053       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.85.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.418102       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.85.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:48.536848       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.85.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.536886       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.85.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:48.559639       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.85.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.559684       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.85.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:48.571229       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.85.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.571271       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.85.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:48.651308       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.85.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.651358       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.85.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:48.759133       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.85.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:48.759175       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.85.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:49.570456       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.85.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:49.570507       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.85.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:49.596093       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.85.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:49.596144       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.85.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:49.733563       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.85.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:49.733606       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.85.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	W0203 22:37:49.746697       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.85.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	E0203 22:37:49.746772       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.85.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	I0203 22:37:49.753765       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0203 22:37:49.753860       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0203 22:37:49.753911       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 22:37:49.754317       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0203 22:37:49.754357       1 run.go:74] "command failed" err="finished without leader elect"
	
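	Note: every reflector failure in the block above is the same symptom: the apiserver at 192.168.85.2:8443 stopped listening while the control plane restarted. A quick, hedged probe to confirm that from the node (endpoint copied from the log; the curl flags are illustrative):
	  $ curl -k --max-time 2 https://192.168.85.2:8443/healthz
	  # "Connection refused" -> apiserver not up yet; the reflectors keep retrying
	  # "ok"                 -> apiserver is back; the replacement scheduler below syncs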
	* 
	* ==> kube-scheduler [6a83f36d4256] <==
	* I0203 22:37:57.539231       1 serving.go:348] Generated self-signed cert in-memory
	W0203 22:38:00.642481       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0203 22:38:00.642517       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0203 22:38:00.642529       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0203 22:38:00.642539       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 22:38:00.735367       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0203 22:38:00.735410       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 22:38:00.736959       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0203 22:38:00.737207       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 22:38:00.737296       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 22:38:00.737365       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0203 22:38:00.838117       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
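	Note: the requestheader warning in this restart is self-documenting: it names the role and the command shape that would grant the lookup. A hedged, fill-in-the-blanks version of that suggestion (ROLEBINDING_NAME, YOUR_NS, and YOUR_SA are placeholders taken from the message itself, not values used by this cluster):
	  $ kubectl -n kube-system create rolebinding ROLEBINDING_NAME \
	      --role=extension-apiserver-authentication-reader \
	      --serviceaccount=YOUR_NS:YOUR_SA
	Here the scheduler simply continued without the authentication configuration, as the next log line records, so no action was needed for the test to proceed.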
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-03 22:36:24 UTC, end at Fri 2023-02-03 22:38:24 UTC. --
	Feb 03 22:37:56 pause-868256 kubelet[7090]: W0203 22:37:56.835486    7090 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 03 22:37:56 pause-868256 kubelet[7090]: E0203 22:37:56.835582    7090 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 03 22:37:57 pause-868256 kubelet[7090]: I0203 22:37:57.488927    7090 kubelet_node_status.go:70] "Attempting to register node" node="pause-868256"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.756726    7090 kubelet_node_status.go:108] "Node was previously registered" node="pause-868256"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.756844    7090 kubelet_node_status.go:73] "Successfully registered node" node="pause-868256"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.757970    7090 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.833730    7090 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.882541    7090 apiserver.go:52] "Watching apiserver"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.885281    7090 topology_manager.go:210] "Topology Admit Handler"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.885740    7090 topology_manager.go:210] "Topology Admit Handler"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.893472    7090 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934056    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aed13814-af10-4c1c-9548-20630079cd3c-config-volume\") pod \"coredns-787d4945fb-dd5vv\" (UID: \"aed13814-af10-4c1c-9548-20630079cd3c\") " pod="kube-system/coredns-787d4945fb-dd5vv"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934120    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a9c6e5f1-fd98-4bc1-aae7-b0485f877616-kube-proxy\") pod \"kube-proxy-6q8r8\" (UID: \"a9c6e5f1-fd98-4bc1-aae7-b0485f877616\") " pod="kube-system/kube-proxy-6q8r8"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934170    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd4tx\" (UniqueName: \"kubernetes.io/projected/a9c6e5f1-fd98-4bc1-aae7-b0485f877616-kube-api-access-xd4tx\") pod \"kube-proxy-6q8r8\" (UID: \"a9c6e5f1-fd98-4bc1-aae7-b0485f877616\") " pod="kube-system/kube-proxy-6q8r8"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934250    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txkvf\" (UniqueName: \"kubernetes.io/projected/aed13814-af10-4c1c-9548-20630079cd3c-kube-api-access-txkvf\") pod \"coredns-787d4945fb-dd5vv\" (UID: \"aed13814-af10-4c1c-9548-20630079cd3c\") " pod="kube-system/coredns-787d4945fb-dd5vv"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934286    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9c6e5f1-fd98-4bc1-aae7-b0485f877616-lib-modules\") pod \"kube-proxy-6q8r8\" (UID: \"a9c6e5f1-fd98-4bc1-aae7-b0485f877616\") " pod="kube-system/kube-proxy-6q8r8"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934317    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9c6e5f1-fd98-4bc1-aae7-b0485f877616-xtables-lock\") pod \"kube-proxy-6q8r8\" (UID: \"a9c6e5f1-fd98-4bc1-aae7-b0485f877616\") " pod="kube-system/kube-proxy-6q8r8"
	Feb 03 22:38:00 pause-868256 kubelet[7090]: I0203 22:38:00.934342    7090 reconciler.go:41] "Reconciler: start to sync state"
	Feb 03 22:38:01 pause-868256 kubelet[7090]: I0203 22:38:01.486360    7090 scope.go:115] "RemoveContainer" containerID="0364f8ab712b82038f0d44ed3b9a487c0a41355a9bf2c3871bc59cbe494bcd13"
	Feb 03 22:38:03 pause-868256 kubelet[7090]: I0203 22:38:03.332601    7090 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Feb 03 22:38:06 pause-868256 kubelet[7090]: I0203 22:38:06.669092    7090 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Feb 03 22:38:16 pause-868256 kubelet[7090]: I0203 22:38:16.302972    7090 topology_manager.go:210] "Topology Admit Handler"
	Feb 03 22:38:16 pause-868256 kubelet[7090]: I0203 22:38:16.447036    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6bb\" (UniqueName: \"kubernetes.io/projected/48da2fca-7198-449d-bebd-84e7ce3d61e0-kube-api-access-pm6bb\") pod \"storage-provisioner\" (UID: \"48da2fca-7198-449d-bebd-84e7ce3d61e0\") " pod="kube-system/storage-provisioner"
	Feb 03 22:38:16 pause-868256 kubelet[7090]: I0203 22:38:16.447113    7090 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/48da2fca-7198-449d-bebd-84e7ce3d61e0-tmp\") pod \"storage-provisioner\" (UID: \"48da2fca-7198-449d-bebd-84e7ce3d61e0\") " pod="kube-system/storage-provisioner"
	Feb 03 22:38:17 pause-868256 kubelet[7090]: I0203 22:38:17.454077    7090 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.454027366 pod.CreationTimestamp="2023-02-03 22:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-03 22:38:17.453840185 +0000 UTC m=+21.696995514" watchObservedRunningTime="2023-02-03 22:38:17.454027366 +0000 UTC m=+21.697182705"
	
	* 
	* ==> storage-provisioner [a511701d78f4] <==
	* I0203 22:38:16.936999       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0203 22:38:16.946697       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0203 22:38:16.946752       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0203 22:38:16.955065       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0203 22:38:16.955238       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-868256_4a0635b1-4204-4ec5-8fe3-0ffa67459c40!
	I0203 22:38:16.955692       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"02e3f377-c0af-4b3c-adb7-b97e0409d467", APIVersion:"v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-868256_4a0635b1-4204-4ec5-8fe3-0ffa67459c40 became leader
	I0203 22:38:17.055517       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-868256_4a0635b1-4204-4ec5-8fe3-0ffa67459c40!
	
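	Note: the provisioner above takes its leader lock on the Endpoints object named in the event. A read-only way to inspect the current holder (object name and namespace copied from the log):
	  $ kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	  # the control-plane.alpha.kubernetes.io/leader annotation carries the holder
	  # identity and lease timestamps for this client-go leader election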

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-868256 -n pause-868256

=== CONT  TestPause/serial/SecondStartNoReconfiguration
helpers_test.go:261: (dbg) Run:  kubectl --context pause-868256 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (78.44s)
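Note: to iterate on this one failure without replaying the whole suite, a hedged re-run (assuming the suite's standard layout, where these helpers live under test/integration in the minikube repo, and that any binary/driver flags the suite needs are supplied the same way as in CI) is:
  $ go test ./test/integration -run 'TestPause/serial/SecondStartNoReconfiguration' -timeout 30m -v
  # -run takes a slash-separated regexp, so only the failing subtest executes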
Test pass (281/302)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 5.84
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.26.1/json-events 4.56
11 TestDownloadOnly/v1.26.1/preload-exists 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.1
16 TestDownloadOnly/DeleteAll 0.69
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.41
19 TestBinaryMirror 1.26
20 TestOffline 67.33
22 TestAddons/Setup 104.79
24 TestAddons/parallel/Registry 16.7
25 TestAddons/parallel/Ingress 26.38
26 TestAddons/parallel/MetricsServer 5.72
27 TestAddons/parallel/HelmTiller 10.82
29 TestAddons/parallel/CSI 47.42
30 TestAddons/parallel/Headlamp 13.19
31 TestAddons/parallel/CloudSpanner 5.6
34 TestAddons/serial/GCPAuth/Namespaces 0.16
35 TestAddons/StoppedEnableDisable 11.43
36 TestCertOptions 32.48
37 TestCertExpiration 250.19
38 TestDockerFlags 36.36
39 TestForceSystemdFlag 39.39
40 TestForceSystemdEnv 45.24
41 TestKVMDriverInstallOrUpdate 1.98
45 TestErrorSpam/setup 27.91
46 TestErrorSpam/start 1.31
47 TestErrorSpam/status 1.63
48 TestErrorSpam/pause 1.81
49 TestErrorSpam/unpause 1.79
50 TestErrorSpam/stop 2.69
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 43.9
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 43.85
57 TestFunctional/serial/KubeContext 0.05
58 TestFunctional/serial/KubectlGetPods 0.06
61 TestFunctional/serial/CacheCmd/cache/add_remote 2.99
62 TestFunctional/serial/CacheCmd/cache/add_local 1.07
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
64 TestFunctional/serial/CacheCmd/cache/list 0.08
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.52
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.38
67 TestFunctional/serial/CacheCmd/cache/delete 0.16
68 TestFunctional/serial/MinikubeKubectlCmd 0.14
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
70 TestFunctional/serial/ExtraConfig 45.45
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.37
73 TestFunctional/serial/LogsFileCmd 1.41
75 TestFunctional/parallel/ConfigCmd 0.54
76 TestFunctional/parallel/DashboardCmd 9.03
77 TestFunctional/parallel/DryRun 1.25
78 TestFunctional/parallel/InternationalLanguage 0.32
79 TestFunctional/parallel/StatusCmd 1.75
82 TestFunctional/parallel/ServiceCmd 14.59
83 TestFunctional/parallel/ServiceCmdConnect 10.21
84 TestFunctional/parallel/AddonsCmd 0.45
85 TestFunctional/parallel/PersistentVolumeClaim 30.53
87 TestFunctional/parallel/SSHCmd 1.48
88 TestFunctional/parallel/CpCmd 2.54
89 TestFunctional/parallel/MySQL 25.31
90 TestFunctional/parallel/FileSync 0.6
91 TestFunctional/parallel/CertSync 3.92
95 TestFunctional/parallel/NodeLabels 0.1
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
99 TestFunctional/parallel/License 0.17
100 TestFunctional/parallel/Version/short 0.09
101 TestFunctional/parallel/Version/components 1.23
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.43
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.39
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.43
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.46
106 TestFunctional/parallel/ImageCommands/ImageBuild 3.84
107 TestFunctional/parallel/ImageCommands/Setup 1.04
108 TestFunctional/parallel/DockerEnv/bash 2.31
109 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.44
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.35
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.33
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.35
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.39
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.1
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.4
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.57
126 TestFunctional/parallel/ImageCommands/ImageRemove 0.78
127 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.68
128 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.17
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.81
130 TestFunctional/parallel/MountCmd/any-port 10.25
131 TestFunctional/parallel/ProfileCmd/profile_list 0.65
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.68
133 TestFunctional/parallel/MountCmd/specific-port 3.07
134 TestFunctional/delete_addon-resizer_images 0.18
135 TestFunctional/delete_my-image_image 0.07
136 TestFunctional/delete_minikube_cached_images 0.06
140 TestImageBuild/serial/NormalBuild 1.05
141 TestImageBuild/serial/BuildWithBuildArg 1.17
142 TestImageBuild/serial/BuildWithDockerIgnore 0.51
143 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.42
146 TestIngressAddonLegacy/StartLegacyK8sCluster 56.75
148 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.35
149 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.49
150 TestIngressAddonLegacy/serial/ValidateIngressAddons 34.1
153 TestJSONOutput/start/Command 46.05
154 TestJSONOutput/start/Audit 0
156 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/pause/Command 0.72
160 TestJSONOutput/pause/Audit 0
162 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/unpause/Command 0.71
166 TestJSONOutput/unpause/Audit 0
168 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/stop/Command 5.97
172 TestJSONOutput/stop/Audit 0
174 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
176 TestErrorJSONOutput 0.53
178 TestKicCustomNetwork/create_custom_network 31.2
179 TestKicCustomNetwork/use_default_bridge_network 30.49
180 TestKicExistingNetwork 31.27
181 TestKicCustomSubnet 30.44
182 TestKicStaticIP 31.36
183 TestMainNoArgs 0.08
184 TestMinikubeProfile 65.88
187 TestMountStart/serial/StartWithMountFirst 7.83
188 TestMountStart/serial/VerifyMountFirst 0.5
189 TestMountStart/serial/StartWithMountSecond 7.69
190 TestMountStart/serial/VerifyMountSecond 0.5
191 TestMountStart/serial/DeleteFirst 2.19
192 TestMountStart/serial/VerifyMountPostDelete 0.5
193 TestMountStart/serial/Stop 1.44
194 TestMountStart/serial/RestartStopped 8.17
195 TestMountStart/serial/VerifyMountPostStop 0.5
198 TestMultiNode/serial/FreshStart2Nodes 63.47
199 TestMultiNode/serial/DeployApp2Nodes 6.72
200 TestMultiNode/serial/PingHostFrom2Pods 1.03
201 TestMultiNode/serial/AddNode 18.81
202 TestMultiNode/serial/ProfileList 0.55
203 TestMultiNode/serial/CopyFile 18.12
204 TestMultiNode/serial/StopNode 3.31
205 TestMultiNode/serial/StartAfterStop 13.61
206 TestMultiNode/serial/RestartKeepsNodes 120.82
207 TestMultiNode/serial/DeleteNode 6.46
208 TestMultiNode/serial/StopMultiNode 22.26
209 TestMultiNode/serial/RestartMultiNode 62.81
210 TestMultiNode/serial/ValidateNameConflict 31.61
215 TestPreload 121.57
217 TestScheduledStopUnix 103.08
218 TestSkaffold 58.12
220 TestInsufficientStorage 13.76
221 TestRunningBinaryUpgrade 94.77
223 TestKubernetesUpgrade 103.58
224 TestMissingContainerUpgrade 113.49
226 TestStoppedBinaryUpgrade/Setup 0.43
227 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
228 TestNoKubernetes/serial/StartWithK8s 49.58
229 TestStoppedBinaryUpgrade/Upgrade 87.89
230 TestNoKubernetes/serial/StartWithStopK8s 19.21
231 TestNoKubernetes/serial/Start 10.26
232 TestNoKubernetes/serial/VerifyK8sNotRunning 0.57
233 TestNoKubernetes/serial/ProfileList 13.18
234 TestStoppedBinaryUpgrade/MinikubeLogs 1.87
235 TestNoKubernetes/serial/Stop 1.63
236 TestNoKubernetes/serial/StartNoArgs 8.4
237 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.67
257 TestPause/serial/Start 51.27
259 TestNetworkPlugins/group/auto/Start 47.74
260 TestNetworkPlugins/group/kindnet/Start 51.89
261 TestNetworkPlugins/group/auto/KubeletFlags 0.6
262 TestNetworkPlugins/group/auto/NetCatPod 9.28
263 TestNetworkPlugins/group/calico/Start 74.15
264 TestNetworkPlugins/group/auto/DNS 0.2
265 TestNetworkPlugins/group/auto/Localhost 0.17
266 TestNetworkPlugins/group/auto/HairPin 0.17
267 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
268 TestNetworkPlugins/group/kindnet/KubeletFlags 0.57
269 TestNetworkPlugins/group/custom-flannel/Start 52.4
270 TestNetworkPlugins/group/kindnet/NetCatPod 12.25
271 TestNetworkPlugins/group/kindnet/DNS 0.21
272 TestNetworkPlugins/group/kindnet/Localhost 0.25
273 TestNetworkPlugins/group/kindnet/HairPin 0.18
274 TestNetworkPlugins/group/calico/ControllerPod 5.02
275 TestNetworkPlugins/group/false/Start 46.21
276 TestNetworkPlugins/group/calico/KubeletFlags 0.74
277 TestNetworkPlugins/group/calico/NetCatPod 12.31
278 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.6
279 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.23
280 TestNetworkPlugins/group/calico/DNS 0.18
281 TestNetworkPlugins/group/calico/Localhost 0.15
282 TestNetworkPlugins/group/calico/HairPin 0.15
283 TestNetworkPlugins/group/custom-flannel/DNS 0.17
284 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
285 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
286 TestNetworkPlugins/group/false/KubeletFlags 0.73
287 TestNetworkPlugins/group/false/NetCatPod 11.23
288 TestNetworkPlugins/group/enable-default-cni/Start 52.62
289 TestNetworkPlugins/group/flannel/Start 57.66
290 TestNetworkPlugins/group/false/DNS 0.22
291 TestNetworkPlugins/group/false/Localhost 0.22
292 TestNetworkPlugins/group/false/HairPin 0.16
293 TestNetworkPlugins/group/bridge/Start 60.15
294 TestNetworkPlugins/group/kubenet/Start 47.96
295 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.67
296 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.36
297 TestNetworkPlugins/group/flannel/ControllerPod 5.02
298 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
299 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
300 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
301 TestNetworkPlugins/group/flannel/KubeletFlags 0.59
302 TestNetworkPlugins/group/flannel/NetCatPod 10.27
303 TestNetworkPlugins/group/flannel/DNS 0.22
304 TestNetworkPlugins/group/bridge/KubeletFlags 0.78
305 TestNetworkPlugins/group/flannel/Localhost 0.18
306 TestNetworkPlugins/group/flannel/HairPin 0.18
307 TestNetworkPlugins/group/bridge/NetCatPod 11.26
308 TestNetworkPlugins/group/kubenet/KubeletFlags 0.62
309 TestNetworkPlugins/group/kubenet/NetCatPod 9.28
310 TestNetworkPlugins/group/bridge/DNS 0.23
311 TestNetworkPlugins/group/bridge/Localhost 0.18
312 TestNetworkPlugins/group/bridge/HairPin 0.19
314 TestStartStop/group/old-k8s-version/serial/FirstStart 133.52
315 TestNetworkPlugins/group/kubenet/DNS 0.19
316 TestNetworkPlugins/group/kubenet/Localhost 0.19
317 TestNetworkPlugins/group/kubenet/HairPin 0.19
319 TestStartStop/group/no-preload/serial/FirstStart 56.52
321 TestStartStop/group/embed-certs/serial/FirstStart 54.48
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.6
324 TestStartStop/group/no-preload/serial/DeployApp 7.37
325 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
326 TestStartStop/group/no-preload/serial/Stop 11.16
327 TestStartStop/group/embed-certs/serial/DeployApp 9.4
328 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
329 TestStartStop/group/embed-certs/serial/Stop 11.22
330 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.49
331 TestStartStop/group/no-preload/serial/SecondStart 334.55
332 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
333 TestStartStop/group/embed-certs/serial/SecondStart 565.59
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.37
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.87
336 TestStartStop/group/old-k8s-version/serial/DeployApp 7.45
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.12
338 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.69
339 TestStartStop/group/old-k8s-version/serial/Stop 11.16
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.34
341 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 313.81
342 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.38
343 TestStartStop/group/old-k8s-version/serial/SecondStart 36.22
344 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 19.01
345 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
346 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.56
347 TestStartStop/group/old-k8s-version/serial/Pause 4.13
349 TestStartStop/group/newest-cni/serial/FirstStart 42.5
350 TestStartStop/group/newest-cni/serial/DeployApp 0
351 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.34
352 TestStartStop/group/newest-cni/serial/Stop 11.11
353 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.33
354 TestStartStop/group/newest-cni/serial/SecondStart 29.31
355 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.57
358 TestStartStop/group/newest-cni/serial/Pause 4.08
359 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.05
360 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
361 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.58
362 TestStartStop/group/no-preload/serial/Pause 4
363 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.02
364 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
365 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.55
366 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.97
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
369 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.55
370 TestStartStop/group/embed-certs/serial/Pause 3.84

TestDownloadOnly/v1.16.0/json-events (5.84s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-556054 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-556054 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.844414732s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (5.84s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-556054
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-556054: exit status 85 (92.742121ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-556054 | jenkins | v1.29.0 | 03 Feb 23 22:08 UTC |          |
	|         | -p download-only-556054        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/03 22:08:12
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 22:08:12.917024  650079 out.go:296] Setting OutFile to fd 1 ...
	I0203 22:08:12.917154  650079 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:08:12.917169  650079 out.go:309] Setting ErrFile to fd 2...
	I0203 22:08:12.917173  650079 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:08:12.917288  650079 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15770-643340/.minikube/bin
	W0203 22:08:12.917411  650079 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15770-643340/.minikube/config/config.json: open /home/jenkins/minikube-integration/15770-643340/.minikube/config/config.json: no such file or directory
	I0203 22:08:12.918050  650079 out.go:303] Setting JSON to true
	I0203 22:08:12.918978  650079 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6643,"bootTime":1675455450,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 22:08:12.919048  650079 start.go:135] virtualization: kvm guest
	I0203 22:08:12.922486  650079 out.go:97] [download-only-556054] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	W0203 22:08:12.922643  650079 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball: no such file or directory
	I0203 22:08:12.922651  650079 notify.go:220] Checking for updates...
	I0203 22:08:12.924477  650079 out.go:169] MINIKUBE_LOCATION=15770
	I0203 22:08:12.926374  650079 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 22:08:12.928411  650079 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:08:12.930405  650079 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	I0203 22:08:12.932383  650079 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0203 22:08:12.935765  650079 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0203 22:08:12.936042  650079 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 22:08:13.009582  650079 docker.go:141] docker version: linux-23.0.0:Docker Engine - Community
	I0203 22:08:13.009711  650079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 22:08:13.133289  650079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:41 SystemTime:2023-02-03 22:08:13.124583556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:31aa4358a36870b21a992d3ad2bef29e1d693bec Expected:31aa4358a36870b21a992d3ad2bef29e1d693bec} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 22:08:13.133400  650079 docker.go:282] overlay module found
	I0203 22:08:13.135655  650079 out.go:97] Using the docker driver based on user configuration
	I0203 22:08:13.135685  650079 start.go:296] selected driver: docker
	I0203 22:08:13.135707  650079 start.go:857] validating driver "docker" against <nil>
	I0203 22:08:13.135793  650079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 22:08:13.259902  650079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:41 SystemTime:2023-02-03 22:08:13.251293684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:31aa4358a36870b21a992d3ad2bef29e1d693bec Expected:31aa4358a36870b21a992d3ad2bef29e1d693bec} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 22:08:13.260036  650079 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0203 22:08:13.260587  650079 start_flags.go:386] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0203 22:08:13.260754  650079 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0203 22:08:13.263607  650079 out.go:169] Using Docker driver with root privileges
	I0203 22:08:13.265413  650079 cni.go:84] Creating CNI manager for ""
	I0203 22:08:13.265450  650079 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0203 22:08:13.265460  650079 start_flags.go:319] config:
	{Name:download-only-556054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-556054 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 22:08:13.268015  650079 out.go:97] Starting control plane node download-only-556054 in cluster download-only-556054
	I0203 22:08:13.268075  650079 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 22:08:13.269933  650079 out.go:97] Pulling base image ...
	I0203 22:08:13.269974  650079 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 22:08:13.270095  650079 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 22:08:13.313887  650079 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0203 22:08:13.313920  650079 cache.go:57] Caching tarball of preloaded images
	I0203 22:08:13.314111  650079 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 22:08:13.316882  650079 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0203 22:08:13.316923  650079 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0203 22:08:13.337576  650079 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 to local cache
	I0203 22:08:13.337750  650079 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local cache directory
	I0203 22:08:13.337843  650079 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 to local cache
	I0203 22:08:13.339253  650079 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0203 22:08:17.425305  650079 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 as a tarball
	I0203 22:08:17.559369  650079 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0203 22:08:17.559462  650079 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-556054"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
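Note: the Last Start log above fetches the preload with an explicit checksum parameter (md5:326f3ce331abb64565b50b8c9e791244) and then verifies it. If a cached preload is ever suspect, the same check can be repeated by hand (path copied from the log; adjust to your MINIKUBE_HOME):
  $ md5sum /home/jenkins/minikube-integration/15770-643340/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
  # expect 326f3ce331abb64565b50b8c9e791244, the checksum in the download URL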

TestDownloadOnly/v1.26.1/json-events (4.56s)

=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-556054 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-556054 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.559346779s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (4.56s)

TestDownloadOnly/v1.26.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

TestDownloadOnly/v1.26.1/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-556054
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-556054: exit status 85 (99.568227ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-556054 | jenkins | v1.29.0 | 03 Feb 23 22:08 UTC |          |
	|         | -p download-only-556054        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-556054 | jenkins | v1.29.0 | 03 Feb 23 22:08 UTC |          |
	|         | -p download-only-556054        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/03 22:08:18
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 22:08:18.855216  650317 out.go:296] Setting OutFile to fd 1 ...
	I0203 22:08:18.855354  650317 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:08:18.855365  650317 out.go:309] Setting ErrFile to fd 2...
	I0203 22:08:18.855372  650317 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:08:18.855498  650317 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15770-643340/.minikube/bin
	W0203 22:08:18.855653  650317 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15770-643340/.minikube/config/config.json: open /home/jenkins/minikube-integration/15770-643340/.minikube/config/config.json: no such file or directory
	I0203 22:08:18.856099  650317 out.go:303] Setting JSON to true
	I0203 22:08:18.856982  650317 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6649,"bootTime":1675455450,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 22:08:18.857067  650317 start.go:135] virtualization: kvm guest
	I0203 22:08:18.860949  650317 out.go:97] [download-only-556054] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 22:08:18.862815  650317 out.go:169] MINIKUBE_LOCATION=15770
	I0203 22:08:18.861169  650317 notify.go:220] Checking for updates...
	I0203 22:08:18.866694  650317 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 22:08:18.868369  650317 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:08:18.870039  650317 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	I0203 22:08:18.871866  650317 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-556054"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.10s)

TestDownloadOnly/DeleteAll (0.69s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.69s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.41s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-556054
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.41s)

TestBinaryMirror (1.26s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-261583 --alsologtostderr --binary-mirror http://127.0.0.1:39791 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-261583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-261583
--- PASS: TestBinaryMirror (1.26s)
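Note: TestBinaryMirror points minikube at a throwaway HTTP mirror on 127.0.0.1:39791. A minimal sketch of standing up such a mirror by hand (the ./mirror directory, its contents, and the mirror-demo profile name are assumptions; the directory would need to mirror the kubernetes release paths minikube requests for kubectl, kubelet, and kubeadm):
  $ python3 -m http.server 39791 --directory ./mirror &
  $ out/minikube-linux-amd64 start --download-only -p mirror-demo \
      --binary-mirror http://127.0.0.1:39791 --driver=docker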

TestOffline (67.33s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-787792 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-787792 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m3.972959282s)
helpers_test.go:175: Cleaning up "offline-docker-787792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-787792
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-787792: (3.35890383s)
--- PASS: TestOffline (67.33s)

TestAddons/Setup (104.79s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-172406 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-172406 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m44.790989891s)
--- PASS: TestAddons/Setup (104.79s)
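Note: the setup run enables nine addons in one shot via repeated --addons flags. The same addons can also be toggled after the fact on the running profile; a hedged example using one addon from the list above:
  $ out/minikube-linux-amd64 -p addons-172406 addons enable metrics-server
  $ out/minikube-linux-amd64 -p addons-172406 addons list    # confirm enabled/disabled state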

TestAddons/parallel/Registry (16.7s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 10.205642ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:344: "registry-6s426" [496e4321-ad18-40d2-b8c7-333306051a91] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010011878s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mdrkh" [7db59f02-c97d-4b62-902f-edf332677475] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009116286s
addons_test.go:305: (dbg) Run:  kubectl --context addons-172406 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-172406 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:310: (dbg) Done: kubectl --context addons-172406 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.317496857s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-172406 ip
2023/02/03 22:10:28 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-172406 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.70s)
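
The decisive step above is the wget --spider probe against the in-cluster service DNS name. For reference, a minimal Go sketch of the same probe, shelling out the way the suite's Run helper does; it assumes kubectl is on PATH and configured with the addons-172406 context from this run:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // One-shot busybox pod that resolves and probes the registry service,
        // mirroring the wget --spider call recorded in the log above
        // (-i instead of the test's -it, since there is no TTY here).
        out, err := exec.Command("kubectl", "--context", "addons-172406",
            "run", "--rm", "registry-test", "--restart=Never",
            "--image=gcr.io/k8s-minikube/busybox", "-i", "--",
            "sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local",
        ).CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            log.Fatalf("registry probe failed: %v", err)
        }
    }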

                                                
                                    
TestAddons/parallel/Ingress (26.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-172406 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-172406 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Done: kubectl --context addons-172406 replace --force -f testdata/nginx-ingress-v1.yaml: (1.426912105s)
addons_test.go:210: (dbg) Run:  kubectl --context addons-172406 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [632136a0-1814-4b12-bb66-93a5fcc77c23] Pending
helpers_test.go:344: "nginx" [632136a0-1814-4b12-bb66-93a5fcc77c23] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [632136a0-1814-4b12-bb66-93a5fcc77c23] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.055654923s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-172406 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-172406 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-172406 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-172406 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p addons-172406 addons disable ingress-dns --alsologtostderr -v=1: (1.823127177s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-172406 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-172406 addons disable ingress --alsologtostderr -v=1: (7.6154928s)
--- PASS: TestAddons/parallel/Ingress (26.38s)
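
The curl step verifies that the ingress controller routes on the Host header. The same check can be expressed directly in Go; this sketch assumes the host can reach the node IP 192.168.49.2 recorded above, rather than going through minikube ssh as the test does:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // Ask the node IP for / while presenting the virtual host the Ingress
        // rule routes on; equivalent to curl -H 'Host: nginx.example.com'.
        req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
        if err != nil {
            log.Fatal(err)
        }
        req.Host = "nginx.example.com" // overrides the Host header Go would send
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println("ingress answered:", resp.Status)
    }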

                                                
                                    
TestAddons/parallel/MetricsServer (5.72s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 10.229457ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-6gnmv" [b63c5ab5-9d0b-43e9-94ea-7ebffffef63d] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009915044s
addons_test.go:380: (dbg) Run:  kubectl --context addons-172406 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-172406 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)
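
The "stabilized in" figure comes from retrying until the metrics API answers. A hedged sketch of that kind of poll; the timeout and interval here are illustrative, not the suite's values:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        // Retry `kubectl top pods` until the metrics API serves data, roughly
        // what the "metrics-server stabilized in ..." timing measures.
        deadline := time.Now().Add(2 * time.Minute)
        for {
            out, err := exec.Command("kubectl", "--context", "addons-172406",
                "top", "pods", "-n", "kube-system").CombinedOutput()
            if err == nil {
                fmt.Printf("%s", out)
                return
            }
            if time.Now().After(deadline) {
                log.Fatalf("metrics API never became ready: %v\n%s", err, out)
            }
            time.Sleep(2 * time.Second)
        }
    }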

                                                
                                    
TestAddons/parallel/HelmTiller (10.82s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 10.149697ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-9f8j2" [a31c72e2-41ed-45bb-9144-49596fb292e3] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010014224s
addons_test.go:438: (dbg) Run:  kubectl --context addons-172406 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-172406 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.233249604s)
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-172406 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.82s)

                                                
                                    
TestAddons/parallel/CSI (47.42s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 12.307943ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-172406 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172406 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-172406 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d202fc38-6b42-441b-a2f1-5d9892337c26] Pending
helpers_test.go:344: "task-pv-pod" [d202fc38-6b42-441b-a2f1-5d9892337c26] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d202fc38-6b42-441b-a2f1-5d9892337c26] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.007036225s
addons_test.go:549: (dbg) Run:  kubectl --context addons-172406 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-172406 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-172406 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-172406 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-172406 delete pod task-pv-pod: (1.318683519s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-172406 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-172406 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-172406 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-172406 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f8ce7e9d-2c6f-48dc-9708-263a6ad7ab5d] Pending
helpers_test.go:344: "task-pv-pod-restore" [f8ce7e9d-2c6f-48dc-9708-263a6ad7ab5d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f8ce7e9d-2c6f-48dc-9708-263a6ad7ab5d] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.007478107s
addons_test.go:591: (dbg) Run:  kubectl --context addons-172406 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-172406 delete pod task-pv-pod-restore: (1.195469512s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-172406 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-172406 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-172406 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-172406 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.080105167s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-172406 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.42s)
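
Each "waiting ... for pvc" step is a poll on a jsonpath expression until the claim reports the expected phase, as the repeated helpers_test.go:394 lines suggest. A compact sketch of that kind of wait, not the suite's actual helper:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPVCPhase polls `kubectl get pvc` until the claim reaches the
    // wanted phase or the timeout expires.
    func waitForPVCPhase(ctx, name, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
                "-o", "jsonpath={.status.phase}", "-n", "default").Output()
            if err == nil && strings.TrimSpace(string(out)) == want {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s did not reach phase %s within %s", name, want, timeout)
    }

    func main() {
        if err := waitForPVCPhase("addons-172406", "hpvc", "Bound", 6*time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Println("hpvc is Bound")
    }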

                                                
                                    
TestAddons/parallel/Headlamp (13.19s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-172406 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-172406 --alsologtostderr -v=1: (2.182944654s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-48q7x" [bcade60f-7eb7-4ac1-a2a5-70367e2a1a44] Pending
helpers_test.go:344: "headlamp-5759877c79-48q7x" [bcade60f-7eb7-4ac1-a2a5-70367e2a1a44] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-48q7x" [bcade60f-7eb7-4ac1-a2a5-70367e2a1a44] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-48q7x" [bcade60f-7eb7-4ac1-a2a5-70367e2a1a44] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.010589052s
--- PASS: TestAddons/parallel/Headlamp (13.19s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-5x5w8" [3e049f73-88f0-4c95-904f-40f4ba285653] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.055066908s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-172406
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-172406 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-172406 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-172406
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-172406: (11.123386904s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-172406
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-172406
--- PASS: TestAddons/StoppedEnableDisable (11.43s)

                                                
                                    
TestCertOptions (32.48s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-145838 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-145838 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (28.110026992s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-145838 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-145838 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-145838 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-145838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-145838
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-145838: (3.108137105s)
--- PASS: TestCertOptions (32.48s)
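
The openssl call inspects the API server certificate's subject alternative names for the extra --apiserver-ips and --apiserver-names values. The same assertion can be written with crypto/x509 once the certificate has been copied off the node; the local filename below is illustrative:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Parse the apiserver certificate and list its SANs, the information
        // the in-test `openssl x509 -text -noout` call checks.
        data, err := os.ReadFile("apiserver.crt") // copied from /var/lib/minikube/certs/
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)     // should include localhost, www.google.com
        fmt.Println("IP SANs: ", cert.IPAddresses) // should include 127.0.0.1, 192.168.15.15
    }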

                                                
                                    
TestCertExpiration (250.19s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-012867 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-012867 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (33.528402277s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-012867 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-012867 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (33.279513017s)
helpers_test.go:175: Cleaning up "cert-expiration-012867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-012867
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-012867: (3.379465488s)
--- PASS: TestCertExpiration (250.19s)

                                                
                                    
TestDockerFlags (36.36s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-636731 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0203 22:37:05.270168  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-636731 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (32.165524394s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-636731 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-636731 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-636731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-636731
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-636731: (2.949341384s)
--- PASS: TestDockerFlags (36.36s)
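
The two systemctl queries confirm that the --docker-env and --docker-opt values reached the Docker unit inside the node. A minimal Go sketch of the Environment half of that check, shelling out over minikube ssh while the profile still exists (the test deletes it at the end):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask systemd for the docker unit's Environment= line and confirm the
        // --docker-env values from the start command are present.
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "docker-flags-636731",
            "ssh", "sudo systemctl show docker --property=Environment --no-pager").Output()
        if err != nil {
            log.Fatal(err)
        }
        env := string(out)
        for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
            if !strings.Contains(env, want) {
                log.Fatalf("missing %s in %s", want, env)
            }
        }
        fmt.Println("docker-env flags propagated:", strings.TrimSpace(env))
    }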

                                                
                                    
TestForceSystemdFlag (39.39s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-804939 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-804939 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (32.582482611s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-804939 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-804939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-804939
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-804939: (5.927490746s)
--- PASS: TestForceSystemdFlag (39.39s)
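
The docker info query should print systemd when --force-systemd took effect, rather than the default cgroupfs. The assertion fits in a few lines of Go; the profile name is taken from this run:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // With --force-systemd, the daemon inside the node should report the
        // systemd cgroup driver.
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-804939",
            "ssh", "docker info --format {{.CgroupDriver}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        if got := strings.TrimSpace(string(out)); got != "systemd" {
            log.Fatalf("cgroup driver = %q, want systemd", got)
        }
        log.Println("cgroup driver is systemd")
    }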

                                                
                                    
TestForceSystemdEnv (45.24s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-432494 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-432494 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.958867455s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-432494 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-432494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-432494
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-432494: (3.499284357s)
--- PASS: TestForceSystemdEnv (45.24s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.98s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.98s)

                                                
                                    
TestErrorSpam/setup (27.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-547039 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-547039 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-547039 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-547039 --driver=docker  --container-runtime=docker: (27.90647303s)
--- PASS: TestErrorSpam/setup (27.91s)

                                                
                                    
TestErrorSpam/start (1.31s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 start --dry-run
--- PASS: TestErrorSpam/start (1.31s)

                                                
                                    
TestErrorSpam/status (1.63s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 status
--- PASS: TestErrorSpam/status (1.63s)

                                                
                                    
TestErrorSpam/pause (1.81s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 pause
--- PASS: TestErrorSpam/pause (1.81s)

                                                
                                    
TestErrorSpam/unpause (1.79s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (2.69s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 stop: (2.247035549s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547039 --log_dir /tmp/nospam-547039 stop
--- PASS: TestErrorSpam/stop (2.69s)
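
The TestErrorSpam subtests rerun each subcommand and scan its output for unexpected log lines. A rough Go sketch of that shape; the marker strings below are illustrative only, the suite's real patterns live in error_spam_test.go:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Run a minikube subcommand and flag any suspicious output lines.
        // The exit status is ignored here on purpose: a stopped cluster makes
        // `status` return non-zero, but the spam check only reads the text.
        out, _ := exec.Command("out/minikube-linux-amd64", "-p", "nospam-547039",
            "--log_dir", "/tmp/nospam-547039", "status").CombinedOutput()
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, "WARNING") || strings.Contains(line, "error") {
                log.Fatalf("unexpected spam in output: %q", line)
            }
        }
        fmt.Println("no spam detected")
    }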

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15770-643340/.minikube/files/etc/test/nested/copy/650065/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (43.9s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-652223 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-652223 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (43.897847058s)
--- PASS: TestFunctional/serial/StartWithProxy (43.90s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (43.85s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-652223 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-652223 --alsologtostderr -v=8: (43.851863711s)
functional_test.go:656: soft start took 43.852674421s for "functional-652223" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.85s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-652223 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-652223 cache add k8s.gcr.io/pause:latest: (1.034191335s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.99s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-652223 /tmp/TestFunctionalserialCacheCmdcacheadd_local3560230408/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 cache add minikube-local-cache-test:functional-652223
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 cache delete minikube-local-cache-test:functional-652223
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-652223
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652223 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (514.960807ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 cache reload
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)
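
The reload sequence above is: remove the image inside the node, confirm crictl no longer finds it (the exit status 1 is the expected result), run cache reload, and confirm the image is back. A sketch of the same sequence, treating the middle failure as success:

    package main

    import (
        "log"
        "os/exec"
    )

    // run invokes the minikube binary under test with the given arguments and
    // streams its output through the standard logger.
    func run(args ...string) error {
        cmd := exec.Command("out/minikube-linux-amd64", args...)
        cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
        return cmd.Run()
    }

    func main() {
        p := "functional-652223"
        // 1. Remove the cached image from inside the node.
        if err := run("-p", p, "ssh", "sudo docker rmi k8s.gcr.io/pause:latest"); err != nil {
            log.Fatal(err)
        }
        // 2. crictl must now fail to find it; failure here is the expected result.
        if err := run("-p", p, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"); err == nil {
            log.Fatal("image still present after rmi")
        }
        // 3. Reload from minikube's local cache and verify the image is back.
        if err := run("-p", p, "cache", "reload"); err != nil {
            log.Fatal(err)
        }
        if err := run("-p", p, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"); err != nil {
            log.Fatalf("image missing after cache reload: %v", err)
        }
    }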

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 kubectl -- --context functional-652223 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-652223 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (45.45s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-652223 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-652223 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.450126828s)
functional_test.go:754: restart took 45.450266159s for "functional-652223" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.45s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-652223 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.37s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-652223 logs: (1.365317746s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.41s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 logs --file /tmp/TestFunctionalserialLogsFileCmd418041908/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-652223 logs --file /tmp/TestFunctionalserialLogsFileCmd418041908/001/logs.txt: (1.410032067s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652223 config get cpus: exit status 14 (84.78392ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652223 config get cpus: exit status 14 (82.913998ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
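
Both Non-zero exit lines show exit status 14 for config get on an unset key. Asserting on that code from Go means unwrapping exec.ExitError; a sketch, assuming the code stays stable across runs as the two occurrences here suggest:

    package main

    import (
        "errors"
        "log"
        "os/exec"
    )

    func main() {
        // `config get` on an unset key should fail with exit status 14,
        // matching the two Non-zero exit lines in the log above.
        err := exec.Command("out/minikube-linux-amd64",
            "-p", "functional-652223", "config", "get", "cpus").Run()
        var ee *exec.ExitError
        if !errors.As(err, &ee) || ee.ExitCode() != 14 {
            log.Fatalf("expected exit status 14 for an unset key, got %v", err)
        }
        log.Println("unset key correctly reported with exit status 14")
    }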

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-652223 --alsologtostderr -v=1]
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-652223 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 713380: os: process already finished
E0203 22:15:12.659451  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
E0203 22:15:12.665962  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
E0203 22:15:12.676163  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
E0203 22:15:12.696533  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
E0203 22:15:12.736899  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DashboardCmd (9.03s)

                                                
                                    
TestFunctional/parallel/DryRun (1.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-652223 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-652223 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (625.225985ms)

-- stdout --
	* [functional-652223] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0203 22:14:57.863538  709535 out.go:296] Setting OutFile to fd 1 ...
	I0203 22:14:57.863635  709535 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:14:57.863644  709535 out.go:309] Setting ErrFile to fd 2...
	I0203 22:14:57.863649  709535 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:14:57.863758  709535 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15770-643340/.minikube/bin
	I0203 22:14:57.864347  709535 out.go:303] Setting JSON to false
	I0203 22:14:57.865702  709535 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7048,"bootTime":1675455450,"procs":460,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 22:14:57.865799  709535 start.go:135] virtualization: kvm guest
	I0203 22:14:57.869455  709535 out.go:177] * [functional-652223] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0203 22:14:57.872012  709535 notify.go:220] Checking for updates...
	I0203 22:14:57.874093  709535 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 22:14:57.876530  709535 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 22:14:57.879036  709535 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:14:57.881599  709535 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	I0203 22:14:57.884077  709535 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 22:14:57.886249  709535 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 22:14:57.888788  709535 config.go:180] Loaded profile config "functional-652223": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:14:57.889248  709535 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 22:14:57.979122  709535 docker.go:141] docker version: linux-23.0.0:Docker Engine - Community
	I0203 22:14:57.979233  709535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 22:14:58.167326  709535 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:38 SystemTime:2023-02-03 22:14:58.158891848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:31aa4358a36870b21a992d3ad2bef29e1d693bec Expected:31aa4358a36870b21a992d3ad2bef29e1d693bec} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 22:14:58.167426  709535 docker.go:282] overlay module found
	I0203 22:14:58.187557  709535 out.go:177] * Using the docker driver based on existing profile
	I0203 22:14:58.253532  709535 start.go:296] selected driver: docker
	I0203 22:14:58.253571  709535 start.go:857] validating driver "docker" against &{Name:functional-652223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-652223 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 22:14:58.253719  709535 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 22:14:58.310795  709535 out.go:177] 
	W0203 22:14:58.327230  709535 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0203 22:14:58.387846  709535 out.go:177] 
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-652223 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (1.25s)
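The stderr block above is the tail of the first dry-run invocation, which requests only 250MB and is rejected in pre-flight validation with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23, see the identical command in InternationalLanguage below); the second, unconstrained invocation validates and exits 0. Nothing is created either way. A minimal sketch of the two checks, assuming the functional-652223 profile already exists:

    # Rejected: 250MiB is below the 1800MB usable minimum (exit 23)
    out/minikube-linux-amd64 start -p functional-652223 --dry-run --memory 250MB \
        --alsologtostderr --driver=docker --container-runtime=docker
    # Accepted: existing profile config validates (exit 0)
    out/minikube-linux-amd64 start -p functional-652223 --dry-run \
        --alsologtostderr -v=1 --driver=docker --container-runtime=docker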

TestFunctional/parallel/InternationalLanguage (0.32s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-652223 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-652223 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (324.104226ms)
-- stdout --
	* [functional-652223] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0203 22:15:00.879499  710801 out.go:296] Setting OutFile to fd 1 ...
	I0203 22:15:00.879653  710801 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:15:00.879671  710801 out.go:309] Setting ErrFile to fd 2...
	I0203 22:15:00.879678  710801 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:15:00.879962  710801 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15770-643340/.minikube/bin
	I0203 22:15:00.880761  710801 out.go:303] Setting JSON to false
	I0203 22:15:00.882372  710801 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7051,"bootTime":1675455450,"procs":473,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0203 22:15:00.882471  710801 start.go:135] virtualization: kvm guest
	I0203 22:15:00.885588  710801 out.go:177] * [functional-652223] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0203 22:15:00.887446  710801 notify.go:220] Checking for updates...
	I0203 22:15:00.887465  710801 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 22:15:00.889763  710801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 22:15:00.891860  710801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	I0203 22:15:00.893672  710801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	I0203 22:15:00.895660  710801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0203 22:15:00.897660  710801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 22:15:00.899756  710801 config.go:180] Loaded profile config "functional-652223": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:15:00.900235  710801 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 22:15:00.978770  710801 docker.go:141] docker version: linux-23.0.0:Docker Engine - Community
	I0203 22:15:00.978976  710801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 22:15:01.104169  710801 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:38 SystemTime:2023-02-03 22:15:01.095263363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:31aa4358a36870b21a992d3ad2bef29e1d693bec Expected:31aa4358a36870b21a992d3ad2bef29e1d693bec} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 22:15:01.104311  710801 docker.go:282] overlay module found
	I0203 22:15:01.107403  710801 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0203 22:15:01.109245  710801 start.go:296] selected driver: docker
	I0203 22:15:01.109279  710801 start.go:857] validating driver "docker" against &{Name:functional-652223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-652223 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 22:15:01.109421  710801 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 22:15:01.112578  710801 out.go:177] 
	W0203 22:15:01.114817  710801 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0203 22:15:01.116695  710801 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.32s)
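This is the same dry-run memory check as in DryRun, with localized output left verbatim above as the test's evidence: "Utilisation du pilote docker basé sur le profil existant" is the French "Using the docker driver based on existing profile", and the "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY" line translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB". A hedged sketch of reproducing it by hand; selecting the language via the process locale (LC_ALL) is an assumption inferred from the test's name, not shown in this log:

    # Assumed mechanism: minikube localizes output from the locale environment
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-652223 --dry-run --memory 250MB \
        --alsologtostderr --driver=docker --container-runtime=docker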

TestFunctional/parallel/StatusCmd (1.75s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 status
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.75s)
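status accepts a Go template via -f, so individual components can be scripted against; note that "kublet" in the logged command is just a literal label baked into the test's format string, while the template variable itself is {{.Kubelet}}. A short sketch of the two machine-readable forms exercised above:

    # Per-field output via Go template (variables as in the log above)
    out/minikube-linux-amd64 -p functional-652223 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    # Full status as JSON, for tooling
    out/minikube-linux-amd64 -p functional-652223 status -o json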

TestFunctional/parallel/ServiceCmd (14.59s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-652223 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1439: (dbg) Run:  kubectl --context functional-652223 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-r8ddr" [3722e317-4869-44a4-8289-d7acce500a0c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:344: "hello-node-6fddd6858d-r8ddr" [3722e317-4869-44a4-8289-d7acce500a0c] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.015263214s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 service list
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 service --namespace=default --https --url hello-node
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1476: found endpoint: https://192.168.49.2:30892
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 service hello-node --url --format={{.IP}}
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 service hello-node --url
functional_test.go:1511: found endpoint for hello-node: http://192.168.49.2:30892
--- PASS: TestFunctional/parallel/ServiceCmd (14.59s)
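The sequence under test: create a Deployment, expose it as a NodePort Service, then let minikube resolve the node IP and assigned port. Condensed from the commands above:

    kubectl --context functional-652223 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
    kubectl --context functional-652223 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-652223 service list
    out/minikube-linux-amd64 -p functional-652223 service hello-node --url   # e.g. http://192.168.49.2:30892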

TestFunctional/parallel/ServiceCmdConnect (10.21s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-652223 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-652223 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-svnf4" [bf8a8f4a-9b95-499b-bb8f-359f5e4debe0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:344: "hello-node-connect-5cf7cc858f-svnf4" [bf8a8f4a-9b95-499b-bb8f-359f5e4debe0] Running
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.02688016s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 service hello-node-connect --url
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.49.2:31411
functional_test.go:1605: http://192.168.49.2:31411: success! body:
Hostname: hello-node-connect-5cf7cc858f-svnf4
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31411
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.21s)
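Unlike ServiceCmd, this test actually connects: the body above is echoserver reflecting the request back (the Go-http-client/1.1 user-agent is the test's own HTTP client). An equivalent manual check; using curl here is an assumption for illustration, the test itself uses Go's net/http:

    URL=$(out/minikube-linux-amd64 -p functional-652223 service hello-node-connect --url)
    curl -s "$URL"   # reply echoes Hostname, method, path and headers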

TestFunctional/parallel/AddonsCmd (0.45s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.45s)

TestFunctional/parallel/PersistentVolumeClaim (30.53s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6697ac18-2920-4a6d-a507-5641c92a0420] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009393463s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-652223 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-652223 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-652223 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-652223 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dcf144d1-b6bc-49d4-a9d6-dc6ebebdbf64] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [dcf144d1-b6bc-49d4-a9d6-dc6ebebdbf64] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [dcf144d1-b6bc-49d4-a9d6-dc6ebebdbf64] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.008053491s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-652223 exec sp-pod -- touch /tmp/mount/foo
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-652223 delete -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-652223 delete -f testdata/storage-provisioner/pod.yaml: (1.414924408s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-652223 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [342e2c15-e7a4-493c-b3d6-b1478f54bcf2] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [342e2c15-e7a4-493c-b3d6-b1478f54bcf2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [342e2c15-e7a4-493c-b3d6-b1478f54bcf2] Running
E0203 22:15:12.978230  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.008804775s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-652223 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.53s)
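The persistence check is: claim a volume, mount it in sp-pod, write /tmp/mount/foo, delete the pod, recreate it from the same manifest, and confirm the file survived. The testdata manifests are not reproduced in this log; a hedged, minimal PVC equivalent (the 500Mi size and reliance on minikube's default storage class are assumptions):

    kubectl --context functional-652223 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF
    # Write, delete the pod, recreate, read back: the file outlives the pod
    kubectl --context functional-652223 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-652223 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-652223 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-652223 exec sp-pod -- ls /tmp/mount   # foo persists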

TestFunctional/parallel/SSHCmd (1.48s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.48s)

TestFunctional/parallel/CpCmd (2.54s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh -n functional-652223 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 cp functional-652223:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4148015064/001/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh -n functional-652223 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.54s)
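cp copies in both directions, host-to-node and node-to-host, and each copy is verified by cat-ing the file over ssh (the -n flag names the target node). The round trip in short:

    out/minikube-linux-amd64 -p functional-652223 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-652223 ssh -n functional-652223 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-652223 cp functional-652223:/home/docker/cp-test.txt /tmp/cp-test.txt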

TestFunctional/parallel/MySQL (25.31s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-652223 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-cfdr5" [0ea64a2a-0d23-40ac-a891-2121ec5012c9] Pending
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-cfdr5" [0ea64a2a-0d23-40ac-a891-2121ec5012c9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-cfdr5" [0ea64a2a-0d23-40ac-a891-2121ec5012c9] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.012289401s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-652223 exec mysql-888f84dd9-cfdr5 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-652223 exec mysql-888f84dd9-cfdr5 -- mysql -ppassword -e "show databases;": exit status 1 (194.476094ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-652223 exec mysql-888f84dd9-cfdr5 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-652223 exec mysql-888f84dd9-cfdr5 -- mysql -ppassword -e "show databases;": exit status 1 (199.908415ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-652223 exec mysql-888f84dd9-cfdr5 -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-652223 exec mysql-888f84dd9-cfdr5 -- mysql -ppassword -e "show databases;": exit status 1 (146.600816ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-652223 exec mysql-888f84dd9-cfdr5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.31s)
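The three failed exec attempts above are expected while mysqld initializes: ERROR 1045 is commonly seen while the container entrypoint is still applying the configured root password, and ERROR 2002 means the server socket is not accepting connections yet. The test simply retries until the query succeeds; a shell equivalent of that retry loop:

    # Retry until mysqld is ready; 1045/2002 are transient during startup
    until kubectl --context functional-652223 exec mysql-888f84dd9-cfdr5 -- \
          mysql -ppassword -e "show databases;"; do
      sleep 2
    done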

TestFunctional/parallel/FileSync (0.6s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/650065/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "sudo cat /etc/test/nested/copy/650065/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.60s)
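File sync copies anything placed under $MINIKUBE_HOME/files/ into the node at the mirrored absolute path when the cluster starts; the fixture checked above lands at /etc/test/nested/copy/650065/hosts. A hedged sketch of that mechanism (the fixture setup itself is not shown in this log):

    # Assumed staging path; mirrored into the node on the next start
    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/650065"
    echo "Test file for checking file sync process" \
      > "$MINIKUBE_HOME/files/etc/test/nested/copy/650065/hosts"
    out/minikube-linux-amd64 -p functional-652223 ssh "sudo cat /etc/test/nested/copy/650065/hosts"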

TestFunctional/parallel/CertSync (3.92s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/650065.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "sudo cat /etc/ssl/certs/650065.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/650065.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "sudo cat /usr/share/ca-certificates/650065.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /etc/ssl/certs/6500652.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "sudo cat /etc/ssl/certs/6500652.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/6500652.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "sudo cat /usr/share/ca-certificates/6500652.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.92s)
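Each synced certificate is checked in three places: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and a hashed /etc/ssl/certs/<hash>.0 name (51391683.0 and 3ec20f2e.0 above). That hashed form looks like the OpenSSL subject-hash naming convention; deriving it as below is an illustration and an assumption, not something the test does:

    # Assumed correspondence: subject hash of the PEM matches the .0 filename
    openssl x509 -hash -noout -in 650065.pem
    out/minikube-linux-amd64 -p functional-652223 ssh "sudo cat /etc/ssl/certs/51391683.0"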

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-652223 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652223 ssh "sudo systemctl is-active crio": exit status 1 (602.48105ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
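The non-zero exit is the point of this test: with --container-runtime=docker, crio must not be running. systemctl is-active prints "inactive" and exits 3 for a stopped unit (the "Process exited with status 3" in stderr), and minikube ssh surfaces that remote failure as a non-zero exit. For contrast, probing the active runtime the same way would succeed; the docker check below is an assumption, not part of the test:

    out/minikube-linux-amd64 -p functional-652223 ssh "sudo systemctl is-active docker"   # active, exit 0
    out/minikube-linux-amd64 -p functional-652223 ssh "sudo systemctl is-active crio"     # inactive, non-zero exit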

TestFunctional/parallel/License (0.17s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.23s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Done: out/minikube-linux-amd64 -p functional-652223 version -o=json --components: (1.23197337s)
--- PASS: TestFunctional/parallel/Version/components (1.23s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-652223 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-652223
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-652223
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.43s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image ls --format table
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-652223 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-652223 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-652223 | 7e9f900b282a6 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| docker.io/localhost/my-image                | functional-652223 | c9ecbc09d18be | 1.24MB |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| docker.io/library/nginx                     | alpine            | c433c51bbd661 | 40.7MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.39s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-652223 image ls --format json:
[{"id":"7e9f900b282a664703b1221c1ff95f27d84d01fc900937590c883fe3df28ee32","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-652223"],"size":"30"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d
7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"
],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"c9ecbc09d18be54f78cf84fdf6eaece1cb410d1f96ba721c7160e38789bc1cf6","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-652223"],"size":"1240000"},{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"5185b96f0becf59032b8e3646e99f8
4d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-652223"],"size":"32900000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.43s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-652223 image ls --format yaml:
- id: c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-652223
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 7e9f900b282a664703b1221c1ff95f27d84d01fc900937590c883fe3df28ee32
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-652223
size: "30"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.46s)
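The four ImageList variants above are the same listing rendered differently: short prints one image ref per line, table adds IDs and sizes, and json/yaml are meant for tooling. All four in one loop:

    for fmt in short table json yaml; do
      out/minikube-linux-amd64 -p functional-652223 image ls --format "$fmt"
    done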

TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652223 ssh pgrep buildkitd: exit status 1 (637.582515ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image build -t localhost/my-image:functional-652223 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-652223 image build -t localhost/my-image:functional-652223 testdata/build: (2.755628587s)
functional_test.go:316: (dbg) Stdout: out/minikube-linux-amd64 -p functional-652223 image build -t localhost/my-image:functional-652223 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 39d7c2edef3d
Removing intermediate container 39d7c2edef3d
---> fb66cdf4ba05
Step 3/3 : ADD content.txt /
---> c9ecbc09d18b
Successfully built c9ecbc09d18b
Successfully tagged localhost/my-image:functional-652223
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)
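
The three logged build steps fully determine the Dockerfile, so the build above can be reproduced by hand. A minimal sketch, assuming the file contents (the real testdata/build directory may differ):

	# Recreate an equivalent build context (the content.txt payload is assumed)
	mkdir -p build && printf 'test' > build/content.txt
	cat > build/Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	# Build inside the cluster's container runtime, as the test does
	out/minikube-linux-amd64 -p functional-652223 image build -t localhost/my-image:functional-652223 ./build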

TestFunctional/parallel/ImageCommands/Setup (1.04s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-652223
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.04s)

TestFunctional/parallel/DockerEnv/bash (2.31s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-652223 docker-env) && out/minikube-linux-amd64 status -p functional-652223"
=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-652223 docker-env) && out/minikube-linux-amd64 status -p functional-652223": (1.490465512s)
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-652223 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.31s)
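
docker-env, as exercised above, exports DOCKER_HOST and related variables so the host docker CLI talks to the Docker daemon inside the minikube container. A sketch of the same flow:

	# Point the local docker client at the cluster's daemon
	eval $(out/minikube-linux-amd64 -p functional-652223 docker-env)
	docker images                         # now lists the cluster runtime's images
	# Revert to the host daemon
	eval $(out/minikube-linux-amd64 -p functional-652223 docker-env --unset)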

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image load --daemon gcr.io/google-containers/addon-resizer:functional-652223
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-652223 image load --daemon gcr.io/google-containers/addon-resizer:functional-652223: (4.93671511s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.44s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.35s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.35s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.35s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.35s)
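
All three subtests run the same command and differ only in the kubeconfig state they start from; update-context rewrites the cluster's server address in kubeconfig when the cluster IP or port has changed. A sketch, with a verification step that is not part of the test:

	out/minikube-linux-amd64 -p functional-652223 update-context --alsologtostderr -v=2
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # confirm the refreshed address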

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-652223 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.39s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-652223 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fe7fdc89-543d-4aa8-a60f-d246ae32c161] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:344: "nginx-svc" [fe7fdc89-543d-4aa8-a60f-d246ae32c161] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.011116624s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.39s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image load --daemon gcr.io/google-containers/addon-resizer:functional-652223
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-652223 image load --daemon gcr.io/google-containers/addon-resizer:functional-652223: (2.589983269s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.10s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-652223
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image load --daemon gcr.io/google-containers/addon-resizer:functional-652223
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-652223 image load --daemon gcr.io/google-containers/addon-resizer:functional-652223: (5.074274588s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-652223 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.98.39.115 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-652223 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
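
The four serial steps above make up the complete tunnel workflow. Condensed into one script (testsvc.yaml is assumed to define the nginx-svc LoadBalancer Service the log waits on):

	out/minikube-linux-amd64 -p functional-652223 tunnel --alsologtostderr &   # StartTunnel
	kubectl --context functional-652223 apply -f testdata/testsvc.yaml         # WaitService/Setup
	IP=$(kubectl --context functional-652223 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')                      # WaitService/IngressIP
	curl -s "http://$IP"                                                       # AccessDirect
	kill %1                                                                    # DeleteTunnel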

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image save gcr.io/google-containers/addon-resizer:functional-652223 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-652223 image save gcr.io/google-containers/addon-resizer:functional-652223 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (2.565929953s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.57s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.78s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image rm gcr.io/google-containers/addon-resizer:functional-652223
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.78s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-652223 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.289947944s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-652223
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 image save --daemon gcr.io/google-containers/addon-resizer:functional-652223
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-652223 image save --daemon gcr.io/google-containers/addon-resizer:functional-652223: (3.016177989s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-652223
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.17s)
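
ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together round-trip an image between the cluster runtime, a tarball, and the host daemon. The same sequence, condensed (tar path shortened for readability):

	out/minikube-linux-amd64 -p functional-652223 image save gcr.io/google-containers/addon-resizer:functional-652223 ./addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-652223 image rm gcr.io/google-containers/addon-resizer:functional-652223
	out/minikube-linux-amd64 -p functional-652223 image load ./addon-resizer-save.tar     # back into the cluster
	out/minikube-linux-amd64 -p functional-652223 image save --daemon gcr.io/google-containers/addon-resizer:functional-652223
	docker image inspect gcr.io/google-containers/addon-resizer:functional-652223         # now present on the host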

TestFunctional/parallel/ProfileCmd/profile_not_create (0.81s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.81s)

TestFunctional/parallel/MountCmd/any-port (10.25s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-652223 /tmp/TestFunctionalparallelMountCmdany-port1467039711/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1675462501214856972" to /tmp/TestFunctionalparallelMountCmdany-port1467039711/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1675462501214856972" to /tmp/TestFunctionalparallelMountCmdany-port1467039711/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1675462501214856972" to /tmp/TestFunctionalparallelMountCmdany-port1467039711/001/test-1675462501214856972
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652223 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (577.304439ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb  3 22:15 created-by-test
-rw-r--r-- 1 docker docker 24 Feb  3 22:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb  3 22:15 test-1675462501214856972
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh cat /mount-9p/test-1675462501214856972
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-652223 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c509d423-5ad2-4f6d-98a9-565880dddc93] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [c509d423-5ad2-4f6d-98a9-565880dddc93] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [c509d423-5ad2-4f6d-98a9-565880dddc93] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [c509d423-5ad2-4f6d-98a9-565880dddc93] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007530642s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-652223 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-652223 /tmp/TestFunctionalparallelMountCmdany-port1467039711/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.25s)
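
The any-port test is a full 9p mount lifecycle: start the mount daemon, poll findmnt until the mount appears (the first probe above fails only because the mount is not up yet, and the test retries), exercise the files from both sides, then unmount. The same steps by hand (the host directory is illustrative):

	out/minikube-linux-amd64 mount -p functional-652223 /tmp/demo:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-652223 ssh "findmnt -T /mount-9p | grep 9p"   # retry until it succeeds
	out/minikube-linux-amd64 -p functional-652223 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-652223 ssh "sudo umount -f /mount-9p"
	kill %1                                                                              # stop the mount daemon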

TestFunctional/parallel/ProfileCmd/profile_list (0.65s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1311: Took "563.437056ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "84.815018ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.65s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.68s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1362: Took "590.861183ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "83.998514ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.68s)
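
The timings above show why --light exists: the plain listing probes each cluster's status (~590ms here), while --light skips the probe (~84ms). A sketch of consuming the JSON form; the valid/invalid top-level keys and the Name field are assumptions about the schema, not taken from this log:

	out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'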

TestFunctional/parallel/MountCmd/specific-port (3.07s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-652223 /tmp/TestFunctionalparallelMountCmdspecific-port1912263845/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652223 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (604.881523ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
2023/02/03 22:15:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "findmnt -T /mount-9p | grep 9p"
E0203 22:15:12.817661  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh -- ls -la /mount-9p
E0203 22:15:13.298706  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-652223 /tmp/TestFunctionalparallelMountCmdspecific-port1912263845/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-652223 ssh "sudo umount -f /mount-9p"
E0203 22:15:13.939487  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-652223 ssh "sudo umount -f /mount-9p": exit status 1 (503.382437ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-652223 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-652223 /tmp/TestFunctionalparallelMountCmdspecific-port1912263845/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
E0203 22:15:15.219682  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
E0203 22:15:17.780427  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.07s)

TestFunctional/delete_addon-resizer_images (0.18s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-652223
--- PASS: TestFunctional/delete_addon-resizer_images (0.18s)

TestFunctional/delete_my-image_image (0.07s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-652223
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.06s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-652223
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (1.05s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-858089
image_test.go:73: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-858089: (1.054003658s)
--- PASS: TestImageBuild/serial/NormalBuild (1.05s)

TestImageBuild/serial/BuildWithBuildArg (1.17s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-858089
image_test.go:94: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-858089: (1.168580025s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.17s)

TestImageBuild/serial/BuildWithDockerIgnore (0.51s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-858089
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.51s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.42s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-858089
E0203 22:15:53.622998  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.42s)
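
The four serial builds above cover the main `minikube image build` flags: a plain build, --build-opt for build args and cache control, a .dockerignore-aware context, and -f for a non-default Dockerfile path. Collected verbatim from the log:

	out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-858089
	out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-858089
	out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-858089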

TestIngressAddonLegacy/StartLegacyK8sCluster (56.75s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-119475 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0203 22:16:34.583763  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-119475 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (56.754239304s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (56.75s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.35s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-119475 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-119475 addons enable ingress --alsologtostderr -v=5: (11.34832945s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.35s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.49s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-119475 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.49s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (34.1s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-119475 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-119475 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.190986172s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-119475 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-119475 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2e1bbeb0-f99e-474e-99d8-ef75a6411ff1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2e1bbeb0-f99e-474e-99d8-ef75a6411ff1] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.007584013s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-119475 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-119475 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-119475 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-119475 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-119475 addons disable ingress-dns --alsologtostderr -v=1: (1.891735394s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-119475 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-119475 addons disable ingress --alsologtostderr -v=1: (7.481872562s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (34.10s)
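
Stripped of the test harness, the validation above is: wait for the controller, deploy an Ingress plus its backing pod/Service, and curl through the controller with the matching Host header. A sketch using the same manifests (apply here stands in for the test's replace --force):

	kubectl --context ingress-addon-legacy-119475 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
	kubectl --context ingress-addon-legacy-119475 apply -f testdata/nginx-ingress-v1beta1.yaml -f testdata/nginx-pod-svc.yaml
	out/minikube-linux-amd64 -p ingress-addon-legacy-119475 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"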

TestJSONOutput/start/Command (46.05s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-212458 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0203 22:17:56.504006  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-212458 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (46.052947606s)
--- PASS: TestJSONOutput/start/Command (46.05s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-212458 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.71s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-212458 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.71s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.97s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-212458 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-212458 --output=json --user=testUser: (5.974043702s)
--- PASS: TestJSONOutput/stop/Command (5.97s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.53s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-953359 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-953359 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.508849ms)
-- stdout --
	{"specversion":"1.0","id":"cf51dddd-5080-4088-a6d1-dfc4480220c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-953359] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c361394d-e6ef-47d4-9b9e-e1c814479f4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15770"}}
	{"specversion":"1.0","id":"c7416ae1-65ca-4433-86da-6690338a9f92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6aa8ee95-636d-4601-b214-a4fc5e06db06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig"}}
	{"specversion":"1.0","id":"41e4734a-ac70-43b7-b7ba-9881d5c077f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube"}}
	{"specversion":"1.0","id":"7a09e0cd-a5a3-451e-8545-d2d3850001cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cbf98635-c2c1-42a8-8402-89189348feff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e30a8669-7933-478a-91bd-d99baece4eed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-953359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-953359
--- PASS: TestErrorJSONOutput (0.53s)
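
Every line of --output=json is a CloudEvents envelope, as the stdout above shows; failures arrive as type io.k8s.sigs.minikube.error carrying the exit code and message in data. A sketch of filtering for them (the profile name is illustrative; the field paths are taken from the logged events):

	out/minikube-linux-amd64 start -p demo --output=json \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.exitcode): \(.data.message)"'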

TestKicCustomNetwork/create_custom_network (31.2s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-588249 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-588249 --network=: (28.388932723s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-588249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-588249
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-588249: (2.746955879s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.20s)

TestKicCustomNetwork/use_default_bridge_network (30.49s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-985314 --network=bridge
E0203 22:19:36.146998  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
E0203 22:19:36.152371  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
E0203 22:19:36.162684  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
E0203 22:19:36.183040  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
E0203 22:19:36.223427  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
E0203 22:19:36.303806  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
E0203 22:19:36.464188  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
E0203 22:19:36.784960  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
E0203 22:19:37.425666  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-985314 --network=bridge: (27.799659493s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-985314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-985314
E0203 22:19:38.706291  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-985314: (2.61802608s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.49s)

TestKicExistingNetwork (31.27s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
E0203 22:19:41.267268  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-889844 --network=existing-network
E0203 22:19:46.387787  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
E0203 22:19:56.628130  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-889844 --network=existing-network: (28.240430101s)
helpers_test.go:175: Cleaning up "existing-network-889844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-889844
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-889844: (2.575282928s)
--- PASS: TestKicExistingNetwork (31.27s)

TestKicCustomSubnet (30.44s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-088818 --subnet=192.168.60.0/24
E0203 22:20:12.659300  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
E0203 22:20:17.108451  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-088818 --subnet=192.168.60.0/24: (27.978569882s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-088818 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-088818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-088818
E0203 22:20:40.344322  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-088818: (2.395658392s)
--- PASS: TestKicCustomSubnet (30.44s)

TestKicStaticIP (31.36s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-753488 --static-ip=192.168.200.200
E0203 22:20:58.068786  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-753488 --static-ip=192.168.200.200: (28.266395379s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-753488 ip
helpers_test.go:175: Cleaning up "static-ip-753488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-753488
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-753488: (2.811637442s)
--- PASS: TestKicStaticIP (31.36s)
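
Note: the static-IP variant works the same way; minikube ip should echo back the address requested at start. Sketch with an illustrative profile name:

    minikube start -p static-ip --static-ip=192.168.200.200
    minikube -p static-ip ip        # expected output: 192.168.200.200
    minikube delete -p static-ip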

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (65.88s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-016776 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-016776 --driver=docker  --container-runtime=docker: (28.210773857s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-020102 --driver=docker  --container-runtime=docker
E0203 22:22:05.269493  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
E0203 22:22:05.274858  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
E0203 22:22:05.285229  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
E0203 22:22:05.305604  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
E0203 22:22:05.345957  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
E0203 22:22:05.426269  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
E0203 22:22:05.586709  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
E0203 22:22:05.907315  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
E0203 22:22:06.548093  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
E0203 22:22:07.828407  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
E0203 22:22:10.389204  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-020102 --driver=docker  --container-runtime=docker: (30.111565224s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-016776
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-020102
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-020102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-020102
E0203 22:22:15.510115  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-020102: (2.783262191s)
helpers_test.go:175: Cleaning up "first-016776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-016776
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-016776: (2.883444919s)
--- PASS: TestMinikubeProfile (65.88s)
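
Note: the profile commands used above select the active profile and then list every profile as JSON. Sketch, assuming two profiles already exist (names illustrative):

    minikube start -p first --driver=docker --container-runtime=docker
    minikube start -p second --driver=docker --container-runtime=docker
    minikube profile first          # make "first" the active profile
    minikube profile list -ojson    # machine-readable view of both profiles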

TestMountStart/serial/StartWithMountFirst (7.83s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-599174 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0203 22:22:19.988996  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
E0203 22:22:25.751223  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-599174 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.83386029s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.83s)
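
Note: the start line above packs all the 9p mount knobs into one command. Sketch of the same invocation with an illustrative profile name; /minikube-host is where the host directory shows up inside the guest, as the Verify steps below demonstrate:

    minikube start -p mount-1 --memory=2048 --mount \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=docker
    minikube -p mount-1 ssh -- ls /minikube-host    # host files visible in the guest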

TestMountStart/serial/VerifyMountFirst (0.5s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-599174 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.50s)

TestMountStart/serial/StartWithMountSecond (7.69s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-623272 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-623272 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.691761526s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.69s)

TestMountStart/serial/VerifyMountSecond (0.5s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-623272 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.50s)

TestMountStart/serial/DeleteFirst (2.19s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-599174 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-599174 --alsologtostderr -v=5: (2.189140582s)
--- PASS: TestMountStart/serial/DeleteFirst (2.19s)

TestMountStart/serial/VerifyMountPostDelete (0.5s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-623272 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.50s)

TestMountStart/serial/Stop (1.44s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-623272
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-623272: (1.436023254s)
--- PASS: TestMountStart/serial/Stop (1.44s)

TestMountStart/serial/RestartStopped (8.17s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-623272
E0203 22:22:46.231360  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-623272: (7.16751674s)
--- PASS: TestMountStart/serial/RestartStopped (8.17s)

TestMountStart/serial/VerifyMountPostStop (0.5s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-623272 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.50s)

TestMultiNode/serial/FreshStart2Nodes (63.47s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-447703 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0203 22:23:27.191875  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-447703 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m2.60231968s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (63.47s)
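
Note: the two-node bring-up above is a single start invocation; status then reports one control plane and one worker. Sketch with an illustrative profile name:

    minikube start -p multinode --nodes=2 --memory=2200 --wait=true \
      --driver=docker --container-runtime=docker
    minikube -p multinode status    # both nodes should report host: Running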

TestMultiNode/serial/DeployApp2Nodes (6.72s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-447703 -- rollout status deployment/busybox: (4.82132372s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- exec busybox-6b86dd6d48-l7978 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- exec busybox-6b86dd6d48-zbkpr -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- exec busybox-6b86dd6d48-l7978 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- exec busybox-6b86dd6d48-zbkpr -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- exec busybox-6b86dd6d48-l7978 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- exec busybox-6b86dd6d48-zbkpr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.72s)

TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- exec busybox-6b86dd6d48-l7978 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- exec busybox-6b86dd6d48-l7978 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- exec busybox-6b86dd6d48-zbkpr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-447703 -- exec busybox-6b86dd6d48-zbkpr -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)

TestMultiNode/serial/AddNode (18.81s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-447703 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-447703 -v 3 --alsologtostderr: (17.592120758s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-linux-amd64 -p multinode-447703 status --alsologtostderr: (1.216711715s)
--- PASS: TestMultiNode/serial/AddNode (18.81s)

TestMultiNode/serial/ProfileList (0.55s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

TestMultiNode/serial/CopyFile (18.12s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-linux-amd64 -p multinode-447703 status --output json --alsologtostderr: (1.169250966s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 cp testdata/cp-test.txt multinode-447703:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 cp multinode-447703:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1040420247/001/cp-test_multinode-447703.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 cp multinode-447703:/home/docker/cp-test.txt multinode-447703-m02:/home/docker/cp-test_multinode-447703_multinode-447703-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703-m02 "sudo cat /home/docker/cp-test_multinode-447703_multinode-447703-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 cp multinode-447703:/home/docker/cp-test.txt multinode-447703-m03:/home/docker/cp-test_multinode-447703_multinode-447703-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703-m03 "sudo cat /home/docker/cp-test_multinode-447703_multinode-447703-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 cp testdata/cp-test.txt multinode-447703-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 cp multinode-447703-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1040420247/001/cp-test_multinode-447703-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 cp multinode-447703-m02:/home/docker/cp-test.txt multinode-447703:/home/docker/cp-test_multinode-447703-m02_multinode-447703.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703 "sudo cat /home/docker/cp-test_multinode-447703-m02_multinode-447703.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 cp multinode-447703-m02:/home/docker/cp-test.txt multinode-447703-m03:/home/docker/cp-test_multinode-447703-m02_multinode-447703-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703-m03 "sudo cat /home/docker/cp-test_multinode-447703-m02_multinode-447703-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 cp testdata/cp-test.txt multinode-447703-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 cp multinode-447703-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1040420247/001/cp-test_multinode-447703-m03.txt
E0203 22:24:36.146640  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 cp multinode-447703-m03:/home/docker/cp-test.txt multinode-447703:/home/docker/cp-test_multinode-447703-m03_multinode-447703.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703 "sudo cat /home/docker/cp-test_multinode-447703-m03_multinode-447703.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 cp multinode-447703-m03:/home/docker/cp-test.txt multinode-447703-m02:/home/docker/cp-test_multinode-447703-m03_multinode-447703-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 ssh -n multinode-447703-m02 "sudo cat /home/docker/cp-test_multinode-447703-m03_multinode-447703-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (18.12s)
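
Note: the cp matrix above covers the three directions minikube cp supports: host to node, node to host, and node to node. Sketch mirroring the commands in the log (profile and paths illustrative):

    minikube -p multinode cp testdata/cp-test.txt multinode:/home/docker/cp-test.txt
    minikube -p multinode cp multinode:/home/docker/cp-test.txt /tmp/cp-test.txt
    minikube -p multinode cp multinode:/home/docker/cp-test.txt \
      multinode-m02:/home/docker/cp-test.txt
    minikube -p multinode ssh -n multinode-m02 "sudo cat /home/docker/cp-test.txt"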

TestMultiNode/serial/StopNode (3.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-447703 node stop m03: (1.440732483s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-447703 status: exit status 7 (934.315467ms)

-- stdout --
	multinode-447703
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-447703-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-447703-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-447703 status --alsologtostderr: exit status 7 (935.249602ms)

-- stdout --
	multinode-447703
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-447703-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-447703-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0203 22:24:42.998723  818268 out.go:296] Setting OutFile to fd 1 ...
	I0203 22:24:42.998949  818268 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:24:42.998959  818268 out.go:309] Setting ErrFile to fd 2...
	I0203 22:24:42.998966  818268 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:24:42.999109  818268 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15770-643340/.minikube/bin
	I0203 22:24:42.999324  818268 out.go:303] Setting JSON to false
	I0203 22:24:42.999367  818268 mustload.go:65] Loading cluster: multinode-447703
	I0203 22:24:42.999414  818268 notify.go:220] Checking for updates...
	I0203 22:24:42.999771  818268 config.go:180] Loaded profile config "multinode-447703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:24:42.999791  818268 status.go:255] checking status of multinode-447703 ...
	I0203 22:24:43.000170  818268 cli_runner.go:164] Run: docker container inspect multinode-447703 --format={{.State.Status}}
	I0203 22:24:43.073802  818268 status.go:330] multinode-447703 host status = "Running" (err=<nil>)
	I0203 22:24:43.073834  818268 host.go:66] Checking if "multinode-447703" exists ...
	I0203 22:24:43.074111  818268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-447703
	I0203 22:24:43.144851  818268 host.go:66] Checking if "multinode-447703" exists ...
	I0203 22:24:43.145181  818268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 22:24:43.145238  818268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-447703
	I0203 22:24:43.212513  818268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/multinode-447703/id_rsa Username:docker}
	I0203 22:24:43.301337  818268 ssh_runner.go:195] Run: systemctl --version
	I0203 22:24:43.305204  818268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 22:24:43.314635  818268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 22:24:43.444148  818268 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:42 SystemTime:2023-02-03 22:24:43.434348275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:31aa4358a36870b21a992d3ad2bef29e1d693bec Expected:31aa4358a36870b21a992d3ad2bef29e1d693bec} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 22:24:43.444750  818268 kubeconfig.go:92] found "multinode-447703" server: "https://192.168.58.2:8443"
	I0203 22:24:43.444785  818268 api_server.go:165] Checking apiserver status ...
	I0203 22:24:43.444824  818268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 22:24:43.454923  818268 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2104/cgroup
	I0203 22:24:43.463098  818268 api_server.go:181] apiserver freezer: "5:freezer:/docker/a95e865578ba7a2b4c76ab874adbe85b567d374b57ef70ac70a0c4b6434ee0dc/kubepods/burstable/podadf4091d6e3cffe30c5074ab6cee0c51/de4d73ab89b223a0c001bf13a6e57602ab8322e5f397b649c1f3268e133cc0a6"
	I0203 22:24:43.463177  818268 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a95e865578ba7a2b4c76ab874adbe85b567d374b57ef70ac70a0c4b6434ee0dc/kubepods/burstable/podadf4091d6e3cffe30c5074ab6cee0c51/de4d73ab89b223a0c001bf13a6e57602ab8322e5f397b649c1f3268e133cc0a6/freezer.state
	I0203 22:24:43.470246  818268 api_server.go:203] freezer state: "THAWED"
	I0203 22:24:43.470280  818268 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0203 22:24:43.474835  818268 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0203 22:24:43.474862  818268 status.go:421] multinode-447703 apiserver status = Running (err=<nil>)
	I0203 22:24:43.474884  818268 status.go:257] multinode-447703 status: &{Name:multinode-447703 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 22:24:43.474907  818268 status.go:255] checking status of multinode-447703-m02 ...
	I0203 22:24:43.475152  818268 cli_runner.go:164] Run: docker container inspect multinode-447703-m02 --format={{.State.Status}}
	I0203 22:24:43.546445  818268 status.go:330] multinode-447703-m02 host status = "Running" (err=<nil>)
	I0203 22:24:43.546477  818268 host.go:66] Checking if "multinode-447703-m02" exists ...
	I0203 22:24:43.546794  818268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-447703-m02
	I0203 22:24:43.615213  818268 host.go:66] Checking if "multinode-447703-m02" exists ...
	I0203 22:24:43.615463  818268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 22:24:43.615537  818268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-447703-m02
	I0203 22:24:43.687050  818268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/15770-643340/.minikube/machines/multinode-447703-m02/id_rsa Username:docker}
	I0203 22:24:43.777181  818268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 22:24:43.786493  818268 status.go:257] multinode-447703-m02 status: &{Name:multinode-447703-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0203 22:24:43.786529  818268 status.go:255] checking status of multinode-447703-m03 ...
	I0203 22:24:43.786831  818268 cli_runner.go:164] Run: docker container inspect multinode-447703-m03 --format={{.State.Status}}
	I0203 22:24:43.858848  818268 status.go:330] multinode-447703-m03 host status = "Stopped" (err=<nil>)
	I0203 22:24:43.858879  818268 status.go:343] host is not running, skipping remaining checks
	I0203 22:24:43.858889  818268 status.go:257] multinode-447703-m03 status: &{Name:multinode-447703-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.31s)

TestMultiNode/serial/StartAfterStop (13.61s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 node start m03 --alsologtostderr
E0203 22:24:49.112123  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-447703 node start m03 --alsologtostderr: (12.281978009s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 status
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-447703 status: (1.187590256s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.61s)
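
Note: StopNode and StartAfterStop together exercise the per-node lifecycle; while any node is down, status exits 7, as shown above. Sketch with an illustrative profile name:

    minikube -p multinode node stop m03     # stop only the third node
    minikube -p multinode status            # exit status 7 while m03 is stopped
    minikube -p multinode node start m03
    kubectl get nodes                       # all nodes Ready again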

TestMultiNode/serial/RestartKeepsNodes (120.82s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-447703
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-447703
E0203 22:25:03.829872  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
E0203 22:25:12.662075  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-447703: (23.076270506s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-447703 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-447703 --wait=true -v=8 --alsologtostderr: (1m37.59714105s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-447703
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.82s)

TestMultiNode/serial/DeleteNode (6.46s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-447703 node delete m03: (5.387732373s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.46s)

TestMultiNode/serial/StopMultiNode (22.26s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 stop
E0203 22:27:05.269545  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-447703 stop: (21.829021318s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-447703 status: exit status 7 (220.461363ms)

-- stdout --
	multinode-447703
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-447703-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-447703 status --alsologtostderr: exit status 7 (212.052366ms)

-- stdout --
	multinode-447703
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-447703-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0203 22:27:26.867346  840833 out.go:296] Setting OutFile to fd 1 ...
	I0203 22:27:26.867555  840833 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:27:26.867565  840833 out.go:309] Setting ErrFile to fd 2...
	I0203 22:27:26.867570  840833 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 22:27:26.867683  840833 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15770-643340/.minikube/bin
	I0203 22:27:26.867852  840833 out.go:303] Setting JSON to false
	I0203 22:27:26.867882  840833 mustload.go:65] Loading cluster: multinode-447703
	I0203 22:27:26.868006  840833 notify.go:220] Checking for updates...
	I0203 22:27:26.868388  840833 config.go:180] Loaded profile config "multinode-447703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 22:27:26.868414  840833 status.go:255] checking status of multinode-447703 ...
	I0203 22:27:26.868972  840833 cli_runner.go:164] Run: docker container inspect multinode-447703 --format={{.State.Status}}
	I0203 22:27:26.939045  840833 status.go:330] multinode-447703 host status = "Stopped" (err=<nil>)
	I0203 22:27:26.939073  840833 status.go:343] host is not running, skipping remaining checks
	I0203 22:27:26.939082  840833 status.go:257] multinode-447703 status: &{Name:multinode-447703 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 22:27:26.939126  840833 status.go:255] checking status of multinode-447703-m02 ...
	I0203 22:27:26.939376  840833 cli_runner.go:164] Run: docker container inspect multinode-447703-m02 --format={{.State.Status}}
	I0203 22:27:27.006997  840833 status.go:330] multinode-447703-m02 host status = "Stopped" (err=<nil>)
	I0203 22:27:27.007045  840833 status.go:343] host is not running, skipping remaining checks
	I0203 22:27:27.007055  840833 status.go:257] multinode-447703-m02 status: &{Name:multinode-447703-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.26s)

TestMultiNode/serial/RestartMultiNode (62.81s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-447703 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0203 22:27:32.952363  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-447703 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m1.714643245s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-447703 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (62.81s)

TestMultiNode/serial/ValidateNameConflict (31.61s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-447703
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-447703-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-447703-m02 --driver=docker  --container-runtime=docker: exit status 14 (109.93267ms)

-- stdout --
	* [multinode-447703-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-447703-m02' is duplicated with machine name 'multinode-447703-m02' in profile 'multinode-447703'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-447703-m03 --driver=docker  --container-runtime=docker
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-447703-m03 --driver=docker  --container-runtime=docker: (28.122230948s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-447703
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-447703: exit status 80 (471.617984ms)

-- stdout --
	* Adding node m03 to cluster multinode-447703
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-447703-m03 already exists in multinode-447703-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-447703-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-447703-m03: (2.835231333s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.61s)

TestPreload (121.57s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-989343 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0203 22:29:36.146839  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-989343 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m4.028111558s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-989343 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-989343 -- docker pull gcr.io/k8s-minikube/busybox: (1.030098225s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-989343
E0203 22:30:12.659419  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-989343: (11.008369477s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-989343 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-989343 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (42.068228109s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-989343 -- docker images
helpers_test.go:175: Cleaning up "test-preload-989343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-989343
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-989343: (2.900020244s)
--- PASS: TestPreload (121.57s)
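
Note: the preload flow above is: start without the preloaded-images tarball, pull an extra image inside the node, restart the same profile, and confirm the image survived. Sketch with an illustrative profile name, flags as in the log:

    minikube start -p preload --memory=2200 --wait=true --preload=false \
      --kubernetes-version=v1.24.4 --driver=docker --container-runtime=docker
    minikube ssh -p preload -- docker pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload
    minikube start -p preload --memory=2200 --wait=true
    minikube ssh -p preload -- docker images   # busybox should still be present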

TestScheduledStopUnix (103.08s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-481589 --memory=2048 --driver=docker  --container-runtime=docker
E0203 22:31:35.704799  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-481589 --memory=2048 --driver=docker  --container-runtime=docker: (28.291573472s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-481589 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-481589 -n scheduled-stop-481589
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-481589 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-481589 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-481589 -n scheduled-stop-481589
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-481589
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-481589 --schedule 15s
E0203 22:32:05.269526  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-481589
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-481589: exit status 7 (146.594476ms)

-- stdout --
	scheduled-stop-481589
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-481589 -n scheduled-stop-481589
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-481589 -n scheduled-stop-481589: exit status 7 (147.881643ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-481589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-481589
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-481589: (2.354995149s)
--- PASS: TestScheduledStopUnix (103.08s)
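
Note: a scheduled stop is armed, inspected, cancelled, and re-armed above. Sketch with an illustrative profile name:

    minikube stop -p sched --schedule 5m               # arm a stop five minutes out
    minikube status -p sched --format={{.TimeToStop}}  # time remaining on the schedule
    minikube stop -p sched --cancel-scheduled          # disarm it
    minikube stop -p sched --schedule 15s              # re-arm; status exits 7 once it fires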

TestSkaffold (58.12s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2672098832 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-339243 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-339243 --memory=2600 --driver=docker  --container-runtime=docker: (28.593341605s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2672098832 run --minikube-profile skaffold-339243 --kube-context skaffold-339243 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2672098832 run --minikube-profile skaffold-339243 --kube-context skaffold-339243 --status-check=true --port-forward=false --interactive=false: (15.730471082s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5b669bf6f-5bh27" [a219dae0-fd08-4e4a-8d35-1d5afacf93f7] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011012578s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7f44586fcb-b94df" [d8213b9f-a4b1-41bd-bc6a-d0240643d244] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006940075s
helpers_test.go:175: Cleaning up "skaffold-339243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-339243
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-339243: (3.088546965s)
--- PASS: TestSkaffold (58.12s)
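
Note: the skaffold run above deploys skaffold's two-service example app (leeroy-app and leeroy-web) against a minikube profile. Sketch, assuming skaffold is on PATH (the harness uses a downloaded copy under /tmp) and that it is run from a project directory containing a skaffold.yaml:

    minikube start -p skaffold --memory=2600 --driver=docker --container-runtime=docker
    skaffold run --minikube-profile skaffold --kube-context skaffold \
      --status-check=true --port-forward=false --interactive=false
    kubectl get pods -l app=leeroy-app    # deployed by the example pipeline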

TestInsufficientStorage (13.76s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-832689 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-832689 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.360220717s)

-- stdout --
	{"specversion":"1.0","id":"1e8216a3-aa4d-49af-ba37-81e80d142b49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-832689] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d4b2094e-ba78-46a9-a3a3-a316fc0e6833","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15770"}}
	{"specversion":"1.0","id":"322ca4f8-c95d-4f2d-ab91-c2d5e984b945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"915483d8-2f62-4966-b12c-f71f83942dd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig"}}
	{"specversion":"1.0","id":"709ad861-63b2-44b2-b510-691d1214cf18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube"}}
	{"specversion":"1.0","id":"2620a826-6a1b-4431-9d5c-e16fac56e2c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"49cde91c-e10a-45a0-aa1e-aa9689cc0daf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a0f0c201-213b-4a02-81fe-14ac99fc8088","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4c98a4b9-fc4c-43c1-896b-a1918a7eeec6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"53dd9031-29d7-4ca5-ae11-9a6edb43627f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a4f5a6d-b11b-4313-8c13-8e28fca03f90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"82885cb0-29da-49a1-81db-8fae380445b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-832689 in cluster insufficient-storage-832689","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"80fa635a-8222-4161-88ef-e0ff6276b1da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1041029c-655e-42d6-9d6f-901c6a3648bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5affcf03-3da6-4b77-91b7-341fff24b9c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-832689 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-832689 --output=json --layout=cluster: exit status 7 (512.248607ms)

-- stdout --
	{"Name":"insufficient-storage-832689","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-832689","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0203 22:34:00.235383  888518 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-832689" does not appear in /home/jenkins/minikube-integration/15770-643340/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-832689 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-832689 --output=json --layout=cluster: exit status 7 (511.527283ms)

-- stdout --
	{"Name":"insufficient-storage-832689","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-832689","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0203 22:34:00.748054  888714 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-832689" does not appear in /home/jenkins/minikube-integration/15770-643340/kubeconfig
	E0203 22:34:00.756898  888714 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/insufficient-storage-832689/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-832689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-832689
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-832689: (2.375792005s)
--- PASS: TestInsufficientStorage (13.76s)
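
Both start commands above were run with --output=json, so every progress step and the final RSRC_DOCKER_STORAGE error arrive as one CloudEvents-style JSON object per line. The following is a minimal decoding sketch in Go; it is not part of the test suite, and the event struct only mirrors the keys visible in the log lines above.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the keys visible in the log above ("specversion", "id",
// "type", "data"); every data value shown there is a string.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Feed it the JSON lines, e.g.:
	//   minikube start -p demo --output=json ... | go run decode.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON line
		}
		// Error events such as RSRC_DOCKER_STORAGE additionally carry
		// "exitcode", "advice", and "issues" in their data map.
		fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
	}
}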

TestRunningBinaryUpgrade (94.77s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.3599635012.exe start -p running-upgrade-086031 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0203 22:35:59.193651  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.3599635012.exe start -p running-upgrade-086031 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m10.1862251s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-086031 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-086031 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.363539151s)
helpers_test.go:175: Cleaning up "running-upgrade-086031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-086031

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-086031: (2.839344997s)
--- PASS: TestRunningBinaryUpgrade (94.77s)

TestKubernetesUpgrade (103.58s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-955330 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-955330 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.303334436s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-955330

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-955330: (4.471791461s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-955330 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-955330 status --format={{.Host}}: exit status 7 (203.996141ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-955330 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-955330 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.888548921s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-955330 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-955330 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-955330 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (126.627132ms)

-- stdout --
	* [kubernetes-upgrade-955330] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-955330
	    minikube start -p kubernetes-upgrade-955330 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9553302 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-955330 --kubernetes-version=v1.26.1
	    

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-955330 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-955330 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (20.192946527s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-955330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-955330
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-955330: (3.313322432s)
--- PASS: TestKubernetesUpgrade (103.58s)
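
The status check in the middle of this test tolerates a non-zero exit: `minikube status` signals a stopped host through its exit code (7) rather than through stderr, which is why the log notes "may be ok". Below is a sketch of that handling; the profile name is a placeholder, not the test's value.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Placeholder profile; the test above uses kubernetes-upgrade-955330.
	cmd := exec.Command("minikube", "-p", "demo", "status", "--format={{.Host}}")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
		// Exit 7 means one or more components are stopped; this is the
		// expected state right after `minikube stop`, so it is not fatal.
		fmt.Printf("host %q (exit 7, may be ok)\n", host)
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host %q\n", host)
}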

TestMissingContainerUpgrade (113.49s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /tmp/minikube-v1.9.1.1734765512.exe start -p missing-upgrade-993444 --memory=2200 --driver=docker  --container-runtime=docker
E0203 22:34:36.147268  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Done: /tmp/minikube-v1.9.1.1734765512.exe start -p missing-upgrade-993444 --memory=2200 --driver=docker  --container-runtime=docker: (1m3.9579342s)
version_upgrade_test.go:326: (dbg) Run:  docker stop missing-upgrade-993444

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:326: (dbg) Done: docker stop missing-upgrade-993444: (1.955348897s)
version_upgrade_test.go:331: (dbg) Run:  docker rm missing-upgrade-993444

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:337: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-993444 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:337: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-993444 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (43.352490902s)
helpers_test.go:175: Cleaning up "missing-upgrade-993444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-993444

=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-993444: (2.745109894s)
--- PASS: TestMissingContainerUpgrade (113.49s)

TestStoppedBinaryUpgrade/Setup (0.43s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-805530 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-805530 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (114.890859ms)

-- stdout --
	* [NoKubernetes-805530] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15770-643340/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15770-643340/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

TestNoKubernetes/serial/StartWithK8s (49.58s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-805530 --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-805530 --driver=docker  --container-runtime=docker: (48.876915536s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-805530 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.58s)

TestStoppedBinaryUpgrade/Upgrade (87.89s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.9.0.3513215369.exe start -p stopped-upgrade-903008 --memory=2200 --vm-driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.9.0.3513215369.exe start -p stopped-upgrade-903008 --memory=2200 --vm-driver=docker  --container-runtime=docker: (53.055004268s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.9.0.3513215369.exe -p stopped-upgrade-903008 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.9.0.3513215369.exe -p stopped-upgrade-903008 stop: (12.58711244s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-903008 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-903008 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.243917533s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (87.89s)
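
The upgrade tests above all share the same shape: create a cluster with an archived release binary, stop (or remove) it, then start again with the binary under test. A condensed sketch of that sequence follows; the binary paths and profile name are placeholders, not the test's values.

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one minikube invocation and streams its output.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	old := "/tmp/minikube-v1.9.0"     // archived release binary (placeholder path)
	cur := "out/minikube-linux-amd64" // binary under test
	p := "stopped-upgrade-sketch"     // placeholder profile name

	// 1. Create the cluster with the old release (note --vm-driver, its flag name).
	run(old, "start", "-p", p, "--memory=2200", "--vm-driver=docker", "--container-runtime=docker")
	// 2. Stop it with the same old binary.
	run(old, "-p", p, "stop")
	// 3. Start again with the binary under test; this restart is the upgrade being verified.
	run(cur, "start", "-p", p, "--memory=2200", "--driver=docker", "--container-runtime=docker")
}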

TestNoKubernetes/serial/StartWithStopK8s (19.21s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-805530 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-805530 --no-kubernetes --driver=docker  --container-runtime=docker: (15.233259284s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-805530 status -o json

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-805530 status -o json: exit status 2 (663.452348ms)

-- stdout --
	{"Name":"NoKubernetes-805530","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-805530

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-805530: (3.314172619s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.21s)

TestNoKubernetes/serial/Start (10.26s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-805530 --no-kubernetes --driver=docker  --container-runtime=docker
E0203 22:35:12.659386  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-805530 --no-kubernetes --driver=docker  --container-runtime=docker: (10.26031185s)
--- PASS: TestNoKubernetes/serial/Start (10.26s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.57s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-805530 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-805530 "sudo systemctl is-active --quiet service kubelet": exit status 1 (572.906218ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.57s)
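
The non-zero exit here is the point of the check: `systemctl is-active` exits 0 only when the unit is active (3 is the usual code for an inactive unit), so the "Process exited with status 3" reported over ssh is exactly the signal that kubelet is not running.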

TestNoKubernetes/serial/ProfileList (13.18s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (7.116801103s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json

=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (6.065760849s)
--- PASS: TestNoKubernetes/serial/ProfileList (13.18s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.87s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-903008
version_upgrade_test.go:214: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-903008: (1.873546838s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.87s)

TestNoKubernetes/serial/Stop (1.63s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-805530
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-805530: (1.629502778s)
--- PASS: TestNoKubernetes/serial/Stop (1.63s)

TestNoKubernetes/serial/StartNoArgs (8.40s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-805530 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-805530 --driver=docker  --container-runtime=docker: (8.397410359s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.40s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.67s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-805530 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-805530 "sudo systemctl is-active --quiet service kubelet": exit status 1 (665.346927ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.67s)

TestPause/serial/Start (51.27s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-868256 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-868256 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (51.266652763s)
--- PASS: TestPause/serial/Start (51.27s)

TestNetworkPlugins/group/auto/Start (47.74s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (47.737806111s)
--- PASS: TestNetworkPlugins/group/auto/Start (47.74s)

TestNetworkPlugins/group/kindnet/Start (51.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (51.88630827s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.89s)

TestNetworkPlugins/group/auto/KubeletFlags (0.60s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-770968 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.60s)

TestNetworkPlugins/group/auto/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-770968 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-vx647" [ae240483-0333-4bf3-b016-911caf84868f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:344: "netcat-694fc96674-vx647" [ae240483-0333-4bf3-b016-911caf84868f] Running

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.006690845s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)

TestNetworkPlugins/group/calico/Start (74.15s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0203 22:38:28.312953  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m14.145532894s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.15s)

TestNetworkPlugins/group/auto/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-770968 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
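
The DNS, Localhost, and HairPin checks above (repeated below for every network plugin) all run inside the netcat pod: nslookup resolves kubernetes.default through the cluster DNS, `nc ... localhost 8080` connects to the pod's own port directly, and `nc ... netcat 8080` connects to that same port through the netcat service name, i.e. the pod reaching itself via its service, the classic hairpin case.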

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-d9zqf" [9f9a3f05-02a6-42df-834f-4c97666c5132] Running
E0203 22:38:56.744208  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/skaffold-339243/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.016985527s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.57s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-770968 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.57s)

TestNetworkPlugins/group/custom-flannel/Start (52.40s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (52.395859772s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-770968 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-sxspj" [6d1a1a5b-cf8a-48b9-9c31-12c806c1e24d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-sxspj" [6d1a1a5b-cf8a-48b9-9c31-12c806c1e24d] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00829299s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-770968 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xsjrv" [cd6984eb-8123-4316-8a16-c22976f4aa31] Running

=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.019602306s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/false/Start (46.21s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p false-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p false-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (46.207012041s)
--- PASS: TestNetworkPlugins/group/false/Start (46.21s)

TestNetworkPlugins/group/calico/KubeletFlags (0.74s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-770968 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.74s)

TestNetworkPlugins/group/calico/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-770968 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-tktn9" [00f6aad3-2c94-4a81-8de5-7379732a71ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:344: "netcat-694fc96674-tktn9" [00f6aad3-2c94-4a81-8de5-7379732a71ec] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.010394337s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.31s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.60s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-770968 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.60s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-770968 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-bhstx" [df2f9386-3575-4abd-8636-c70c2dd38256] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/custom-flannel/NetCatPod
helpers_test.go:344: "netcat-694fc96674-bhstx" [df2f9386-3575-4abd-8636-c70c2dd38256] Running
E0203 22:39:58.185885  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/skaffold-339243/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.006842209s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-770968 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-770968 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/false/KubeletFlags (0.73s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-770968 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.73s)

TestNetworkPlugins/group/false/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-770968 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-5mh9k" [36c2a805-d26a-4386-81f0-b27335799e22] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:344: "netcat-694fc96674-5mh9k" [36c2a805-d26a-4386-81f0-b27335799e22] Running

=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.007879997s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.23s)

TestNetworkPlugins/group/enable-default-cni/Start (52.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (52.614878122s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (52.62s)

TestNetworkPlugins/group/flannel/Start (57.66s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (57.657460522s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.66s)

TestNetworkPlugins/group/false/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-770968 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

TestNetworkPlugins/group/false/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.22s)

TestNetworkPlugins/group/false/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (60.15s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m0.154587015s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.15s)

TestNetworkPlugins/group/kubenet/Start (47.96s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0203 22:41:20.107086  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/skaffold-339243/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-770968 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (47.96044709s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (47.96s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.67s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-770968 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.67s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-770968 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-plhmb" [d0e10759-3836-4e65-a7b1-208c902f3556] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-plhmb" [d0e10759-3836-4e65-a7b1-208c902f3556] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.008269319s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7nps9" [b625ff68-0a0f-452a-bf6e-e5b340c9569f] Running

=== CONT  TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.014230951s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-770968 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.59s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-770968 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.59s)

TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-770968 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-4r2xd" [8bc14101-b40a-45db-9012-082199690111] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-4r2xd" [8bc14101-b40a-45db-9012-082199690111] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.007194403s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-770968 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.78s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-770968 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.78s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-770968 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-blhmd" [babba239-15da-4283-a3ec-907a36b6fe9b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-blhmd" [babba239-15da-4283-a3ec-907a36b6fe9b] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006237361s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.62s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-770968 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.62s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-770968 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-hjskz" [27cd3e14-62da-4a51-9ff3-490e48794097] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:344: "netcat-694fc96674-hjskz" [27cd3e14-62da-4a51-9ff3-490e48794097] Running

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.006912399s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.28s)

TestNetworkPlugins/group/bridge/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-770968 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

TestStartStop/group/old-k8s-version/serial/FirstStart (133.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-503399 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-503399 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m13.520858674s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (133.52s)

TestNetworkPlugins/group/kubenet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-770968 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

TestNetworkPlugins/group/kubenet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

TestNetworkPlugins/group/kubenet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-770968 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.19s)
E0203 22:47:25.814622  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:47:34.180310  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:47:37.319894  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:47:45.356453  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:47:49.614193  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:47:58.096016  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:48:15.141051  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:48:15.627653  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
E0203 22:48:15.705897  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
E0203 22:48:20.211408  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:48:26.317495  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:48:36.257353  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/skaffold-339243/client.crt: no such file or directory
E0203 22:48:47.894954  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:48:54.776560  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:49:11.534460  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (56.52s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-784178 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-784178 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (56.515364884s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.52s)

TestStartStop/group/embed-certs/serial/FirstStart (54.48s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-517967 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-517967 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (54.483464416s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.48s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-931384 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0203 22:43:20.210824  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:43:20.216091  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:43:20.226328  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:43:20.246580  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:43:20.286759  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:43:20.367525  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:43:20.528138  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:43:20.849154  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:43:21.489628  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:43:22.770197  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-931384 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (1m23.603180872s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.60s)

TestStartStop/group/no-preload/serial/DeployApp (7.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-784178 create -f testdata/busybox.yaml
E0203 22:43:25.331091  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [56582b2f-a3a4-4c51-8198-7d376a13cad3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [56582b2f-a3a4-4c51-8198-7d376a13cad3] Running
E0203 22:43:30.451707  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.014567339s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-784178 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-784178 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-784178 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/no-preload/serial/Stop (11.16s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-784178 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-784178 --alsologtostderr -v=3: (11.161512112s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.16s)

TestStartStop/group/embed-certs/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-517967 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3979ad7a-1c91-438c-8023-985a22da67ed] Pending
helpers_test.go:344: "busybox" [3979ad7a-1c91-438c-8023-985a22da67ed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0203 22:43:36.257539  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/skaffold-339243/client.crt: no such file or directory
helpers_test.go:344: "busybox" [3979ad7a-1c91-438c-8023-985a22da67ed] Running
E0203 22:43:40.692374  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.015051875s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-517967 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-517967 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-517967 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/embed-certs/serial/Stop (11.22s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-517967 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-517967 --alsologtostderr -v=3: (11.21972886s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.22s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.49s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-784178 -n no-preload-784178
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-784178 -n no-preload-784178: exit status 7 (172.624333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-784178 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.49s)

TestStartStop/group/no-preload/serial/SecondStart (334.55s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-784178 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0203 22:43:54.776227  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:43:54.782407  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:43:54.792753  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:43:54.813123  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:43:54.853517  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:43:54.933888  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:43:55.094945  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:43:55.415965  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-784178 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (5m33.82686327s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-784178 -n no-preload-784178
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (334.55s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-517967 -n embed-certs-517967
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-517967 -n embed-certs-517967: exit status 7 (147.474151ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-517967 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (565.59s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-517967 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0203 22:43:56.056456  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:43:57.337518  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:43:59.897713  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:44:01.173486  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:44:03.947706  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/skaffold-339243/client.crt: no such file or directory
E0203 22:44:05.017941  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:44:15.259068  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-517967 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (9m25.014109962s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-517967 -n embed-certs-517967
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (565.59s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-931384 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5ed3584c-e349-4722-8425-6cf387685e13] Pending
helpers_test.go:344: "busybox" [5ed3584c-e349-4722-8425-6cf387685e13] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5ed3584c-e349-4722-8425-6cf387685e13] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.015114489s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-931384 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-931384 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-931384 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-503399 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6105ab5d-8610-40a7-b105-71aacc4ae10e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:344: "busybox" [6105ab5d-8610-40a7-b105-71aacc4ae10e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.012968682s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-503399 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.45s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-931384 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-931384 --alsologtostderr -v=3: (11.118293779s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-503399 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-503399 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.69s)

TestStartStop/group/old-k8s-version/serial/Stop (11.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-503399 --alsologtostderr -v=3
E0203 22:44:35.740079  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:44:36.147259  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-503399 --alsologtostderr -v=3: (11.155507628s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.16s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-931384 -n default-k8s-diff-port-931384
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-931384 -n default-k8s-diff-port-931384: exit status 7 (163.392141ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-931384 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (313.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-931384 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0203 22:44:41.970882  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:44:41.976197  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:44:41.986461  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:44:42.006809  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:44:42.047139  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:44:42.127667  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:44:42.133915  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:44:42.288034  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:44:42.608582  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:44:43.249385  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-931384 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (5m13.070610164s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-931384 -n default-k8s-diff-port-931384
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (313.81s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503399 -n old-k8s-version-503399
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503399 -n old-k8s-version-503399: exit status 7 (178.221868ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-503399 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/old-k8s-version/serial/SecondStart (36.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-503399 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0203 22:44:44.530421  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:44:47.091610  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:44:52.212038  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:44:53.476838  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:44:53.482136  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:44:53.492434  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:44:53.512702  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:44:53.553057  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:44:53.633458  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:44:53.793912  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:44:54.114482  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:44:54.755619  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:44:56.036114  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:44:58.596679  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:45:02.452503  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:45:03.717341  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:45:12.658894  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/addons-172406/client.crt: no such file or directory
E0203 22:45:13.958321  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:45:16.700431  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-503399 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (35.635387737s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-503399 -n old-k8s-version-503399
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (36.22s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (19.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0203 22:45:22.932665  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-d8vrw" [6adabbe8-7d26-47b0-994c-a7b37e13bece] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0203 22:45:31.784095  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
E0203 22:45:31.789419  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
E0203 22:45:31.799726  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
E0203 22:45:31.820031  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
E0203 22:45:31.860417  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
E0203 22:45:31.940709  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
E0203 22:45:32.101222  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
E0203 22:45:32.422271  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
E0203 22:45:33.063309  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-d8vrw" [6adabbe8-7d26-47b0-994c-a7b37e13bece] Running
E0203 22:45:34.344302  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
E0203 22:45:34.438467  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:45:36.904898  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.011734483s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (19.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-d8vrw" [6adabbe8-7d26-47b0-994c-a7b37e13bece] Running
E0203 22:45:42.025462  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006687676s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-503399 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-503399 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.56s)

TestStartStop/group/old-k8s-version/serial/Pause (4.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-503399 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503399 -n old-k8s-version-503399
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503399 -n old-k8s-version-503399: exit status 2 (566.831775ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-503399 -n old-k8s-version-503399
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-503399 -n old-k8s-version-503399: exit status 2 (573.963416ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-503399 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-503399 -n old-k8s-version-503399
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-503399 -n old-k8s-version-503399
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.13s)

TestStartStop/group/newest-cni/serial/FirstStart (42.5s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-902584 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0203 22:46:03.893774  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
E0203 22:46:04.054276  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/auto-770968/client.crt: no such file or directory
E0203 22:46:12.746571  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
E0203 22:46:15.398690  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
E0203 22:46:27.689553  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:46:27.694869  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:46:27.705155  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:46:27.725507  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:46:27.765809  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:46:27.846137  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:46:28.006671  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:46:28.327830  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:46:28.968564  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:46:30.249011  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:46:32.810108  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-902584 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (42.496942312s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.50s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-902584 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0203 22:46:36.172521  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:46:36.177842  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:46:36.188500  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:46:36.209425  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:46:36.250019  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:46:36.330584  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:46:36.491026  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-902584 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.336346448s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

TestStartStop/group/newest-cni/serial/Stop (11.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-902584 --alsologtostderr -v=3
E0203 22:46:36.811880  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:46:37.452447  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:46:37.930886  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:46:38.621488  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:46:38.732932  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:46:41.293448  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:46:46.413846  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-902584 --alsologtostderr -v=3: (11.112549636s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-902584 -n newest-cni-902584
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-902584 -n newest-cni-902584: exit status 7 (147.352196ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-902584 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/newest-cni/serial/SecondStart (29.31s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-902584 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0203 22:46:48.171887  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:46:53.216422  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:46:53.221845  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:46:53.232160  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:46:53.252461  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:46:53.292768  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:46:53.373488  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:46:53.534045  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:46:53.707485  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/false-770968/client.crt: no such file or directory
E0203 22:46:53.854213  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:46:54.494471  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:46:55.775376  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:46:56.654974  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:46:58.336566  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:47:03.457269  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:47:04.394689  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:47:04.399960  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:47:04.410310  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:47:04.430625  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:47:04.470954  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:47:04.551335  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:47:04.712437  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:47:05.032861  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:47:05.270195  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/ingress-addon-legacy-119475/client.crt: no such file or directory
E0203 22:47:05.673031  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:47:06.953275  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:47:08.653090  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/enable-default-cni-770968/client.crt: no such file or directory
E0203 22:47:09.514466  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
E0203 22:47:13.697960  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
E0203 22:47:14.635558  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubenet-770968/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-902584 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (28.723688303s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-902584 -n newest-cni-902584
E0203 22:47:17.135541  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.31s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.57s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-902584 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.57s)

TestStartStop/group/newest-cni/serial/Pause (4.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-902584 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-902584 -n newest-cni-902584
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-902584 -n newest-cni-902584: exit status 2 (570.44701ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-902584 -n newest-cni-902584
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-902584 -n newest-cni-902584: exit status 2 (575.717161ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-902584 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-902584 -n newest-cni-902584
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-902584 -n newest-cni-902584
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.08s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.05s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-hscq6" [8b445c2b-3e4a-458e-80e5-24d41f91ba32] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0203 22:49:20.017006  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/flannel-770968/client.crt: no such file or directory
E0203 22:49:22.462507  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kindnet-770968/client.crt: no such file or directory
E0203 22:49:24.742224  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
E0203 22:49:24.747548  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
E0203 22:49:24.757916  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
E0203 22:49:24.778354  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
E0203 22:49:24.818675  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
E0203 22:49:24.899054  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
E0203 22:49:25.059515  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
E0203 22:49:25.380388  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
E0203 22:49:26.021536  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-hscq6" [8b445c2b-3e4a-458e-80e5-24d41f91ba32] Running
E0203 22:49:27.302611  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
E0203 22:49:29.863659  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.049208168s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.05s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-hscq6" [8b445c2b-3e4a-458e-80e5-24d41f91ba32] Running
E0203 22:49:34.983900  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
E0203 22:49:36.147151  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/functional-652223/client.crt: no such file or directory
E0203 22:49:37.061236  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/bridge-770968/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006733152s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-784178 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.58s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-784178 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.58s)

TestStartStop/group/no-preload/serial/Pause (4s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-784178 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-784178 -n no-preload-784178
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-784178 -n no-preload-784178: exit status 2 (557.647184ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-784178 -n no-preload-784178
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-784178 -n no-preload-784178: exit status 2 (567.986644ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-784178 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-784178 -n no-preload-784178
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-784178 -n no-preload-784178
E0203 22:49:41.970396  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/calico-770968/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.00s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-lm5q8" [1222212a-9b4c-48ef-9abc-efaaa5f9398b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0203 22:49:53.476949  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/custom-flannel-770968/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-lm5q8" [1222212a-9b4c-48ef-9abc-efaaa5f9398b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.014542609s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-lm5q8" [1222212a-9b4c-48ef-9abc-efaaa5f9398b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007441785s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-931384 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-931384 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.55s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-931384 --alsologtostderr -v=1
E0203 22:50:05.705544  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/old-k8s-version-503399/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-931384 -n default-k8s-diff-port-931384
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-931384 -n default-k8s-diff-port-931384: exit status 2 (552.322516ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-931384 -n default-k8s-diff-port-931384
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-931384 -n default-k8s-diff-port-931384: exit status 2 (551.452538ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-931384 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-931384 -n default-k8s-diff-port-931384
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-931384 -n default-k8s-diff-port-931384
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.97s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-r5w2c" [d5097ceb-4f1c-442f-9971-32ed75112e54] Running
E0203 22:53:25.354077  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/no-preload-784178/client.crt: no such file or directory
E0203 22:53:25.359455  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/no-preload-784178/client.crt: no such file or directory
E0203 22:53:25.369855  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/no-preload-784178/client.crt: no such file or directory
E0203 22:53:25.390263  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/no-preload-784178/client.crt: no such file or directory
E0203 22:53:25.430673  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/no-preload-784178/client.crt: no such file or directory
E0203 22:53:25.511081  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/no-preload-784178/client.crt: no such file or directory
E0203 22:53:25.671684  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/no-preload-784178/client.crt: no such file or directory
E0203 22:53:25.992437  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/no-preload-784178/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012444926s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-r5w2c" [d5097ceb-4f1c-442f-9971-32ed75112e54] Running
E0203 22:53:26.633289  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/no-preload-784178/client.crt: no such file or directory
E0203 22:53:27.913466  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/no-preload-784178/client.crt: no such file or directory
E0203 22:53:30.474443  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/no-preload-784178/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006250626s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-517967 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.55s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-517967 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.55s)

TestStartStop/group/embed-certs/serial/Pause (3.84s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-517967 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-517967 -n embed-certs-517967
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-517967 -n embed-certs-517967: exit status 2 (537.032841ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-517967 -n embed-certs-517967
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-517967 -n embed-certs-517967: exit status 2 (533.308217ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-517967 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-517967 -n embed-certs-517967
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-517967 -n embed-certs-517967
E0203 22:53:35.594899  650065 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/no-preload-784178/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.84s)

Test skip (19/302)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestDownloadOnly/v1.26.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.1/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.67s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated

=== CONT  TestNetworkPlugins/group/cilium
panic.go:522: 
----------------------- debugLogs start: cilium-770968 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-770968

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-770968

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-770968

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-770968

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-770968

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-770968

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-770968

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-770968

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-770968

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-770968

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-770968

>>> host: crictl pods:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: crictl containers:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> k8s: describe netcat deployment:
error: context "cilium-770968" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-770968" does not exist

>>> k8s: netcat logs:
error: context "cilium-770968" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-770968" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-770968" does not exist

>>> k8s: coredns logs:
error: context "cilium-770968" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-770968" does not exist

>>> k8s: api server logs:
error: context "cilium-770968" does not exist

>>> host: /etc/cni:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: ip a s:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: ip r s:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: iptables-save:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: iptables table nat:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-770968

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-770968

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-770968" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-770968" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-770968

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-770968

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-770968" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-770968" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-770968" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-770968" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-770968" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: kubelet daemon config:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> k8s: kubelet logs:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 03 Feb 2023 22:35:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-955330
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15770-643340/.minikube/ca.crt
    server: https://192.168.85.2:8443
  name: missing-upgrade-993444
contexts:
- context:
    cluster: kubernetes-upgrade-955330
    extensions:
    - extension:
        last-update: Fri, 03 Feb 2023 22:35:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-955330
  name: kubernetes-upgrade-955330
- context:
    cluster: missing-upgrade-993444
    user: missing-upgrade-993444
  name: missing-upgrade-993444
current-context: kubernetes-upgrade-955330
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-955330
  user:
    client-certificate: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubernetes-upgrade-955330/client.crt
    client-key: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/kubernetes-upgrade-955330/client.key
- name: missing-upgrade-993444
  user:
    client-certificate: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/missing-upgrade-993444/client.crt
    client-key: /home/jenkins/minikube-integration/15770-643340/.minikube/profiles/missing-upgrade-993444/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-770968

>>> host: docker daemon status:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: docker daemon config:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: docker system info:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: cri-docker daemon status:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: cri-docker daemon config:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: cri-dockerd version:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: containerd daemon status:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: containerd daemon config:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: containerd config dump:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: crio daemon status:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: crio daemon config:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: /etc/crio:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

>>> host: crio config:
* Profile "cilium-770968" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770968"

----------------------- debugLogs end: cilium-770968 [took: 5.098731482s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-770968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-770968
--- SKIP: TestNetworkPlugins/group/cilium (5.67s)

TestStartStop/group/disable-driver-mounts (0.76s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-360492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-360492
--- SKIP: TestStartStop/group/disable-driver-mounts (0.76s)
