Test Report: KVM_Linux 15565

1a22b9432724c1a7c0bfc1f92a18db163006c245:2023-01-28:27621

Failed tests (2/300)

Order  Failed test                                      Duration (s)
45     TestErrorSpam/setup                              53.82
246    TestPause/serial/SecondStartNoReconfiguration    96.5
TestErrorSpam/setup (53.82s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-430971 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-430971 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-430971 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-430971 --driver=kvm2 : (53.815229675s)
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0"
error_spam_test.go:110: minikube stdout:
* [nospam-430971] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15565
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting control plane node nospam-430971 in cluster nospam-430971
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-430971" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
--- FAIL: TestErrorSpam/setup (53.82s)
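For context on this failure: the assertion at error_spam_test.go:96 is an allowlist-style check on stderr, where any warning printed by a fresh minikube start that is not explicitly permitted fails the test. The sketch below is a hypothetical, self-contained illustration of that pattern, not the actual minikube test source; the profile name and allowlist entries are placeholders.

// errorspam_sketch_test.go — hypothetical illustration only, not the minikube test source.
package sketch

import (
	"os/exec"
	"strings"
	"testing"
)

// Warnings tolerated on a clean first start; anything else on stderr counts as "spam".
// The "Image was not built for the current minikube version" warning seen in the run
// above is not in such a list, which is why the real test failed.
var allowedStderr = []string{
	"You are using an old Docker client", // placeholder entry
}

func TestNoErrorSpamOnSetup(t *testing.T) {
	// Run a fresh start and capture only its stderr.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "nospam-sketch", "--memory=2250", "--wait=false", "--driver=kvm2")
	var stderr strings.Builder
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		t.Fatalf("minikube start failed: %v", err)
	}
	// Any non-empty stderr line that does not match an allowlist entry fails the test.
	for _, line := range strings.Split(stderr.String(), "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		allowed := false
		for _, ok := range allowedStderr {
			if strings.Contains(line, ok) {
				allowed = true
				break
			}
		}
		if !allowed {
			t.Errorf("unexpected stderr: %q", line)
		}
	}
}

In the run above the start itself completed in roughly 53.8s; only the unexpected version-mismatch warning on stderr triggered the failure.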

TestPause/serial/SecondStartNoReconfiguration (96.5s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-539738 --alsologtostderr -v=1 --driver=kvm2 

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-539738 --alsologtostderr -v=1 --driver=kvm2 : (1m31.324222987s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-539738] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-539738 in cluster pause-539738
	* Updating the running kvm2 "pause-539738" VM ...
	* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Done! kubectl is now configured to use "pause-539738" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0128 04:06:51.876458   27997 out.go:296] Setting OutFile to fd 1 ...
	I0128 04:06:51.876599   27997 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 04:06:51.876610   27997 out.go:309] Setting ErrFile to fd 2...
	I0128 04:06:51.876617   27997 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 04:06:51.876845   27997 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3903/.minikube/bin
	I0128 04:06:51.877807   27997 out.go:303] Setting JSON to false
	I0128 04:06:51.878637   27997 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2963,"bootTime":1674875849,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0128 04:06:51.878696   27997 start.go:135] virtualization: kvm guest
	I0128 04:06:51.881545   27997 out.go:177] * [pause-539738] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0128 04:06:51.883197   27997 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 04:06:51.883131   27997 notify.go:220] Checking for updates...
	I0128 04:06:51.884706   27997 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 04:06:51.886315   27997 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	I0128 04:06:51.887792   27997 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	I0128 04:06:51.889468   27997 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0128 04:06:51.891042   27997 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 04:06:51.893089   27997 config.go:180] Loaded profile config "pause-539738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 04:06:51.893463   27997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 04:06:51.893519   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:06:51.909675   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0128 04:06:51.910051   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:06:51.910663   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:06:51.910689   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:06:51.910995   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:06:51.911168   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:06:51.911360   27997 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 04:06:51.911764   27997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 04:06:51.911809   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:06:51.927258   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44431
	I0128 04:06:51.927631   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:06:51.928110   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:06:51.928141   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:06:51.928427   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:06:51.928599   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:06:51.968946   27997 out.go:177] * Using the kvm2 driver based on existing profile
	I0128 04:06:51.970365   27997 start.go:296] selected driver: kvm2
	I0128 04:06:51.970384   27997 start.go:840] validating driver "kvm2" against &{Name:pause-539738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.26.1 ClusterName:pause-539738 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.35 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0128 04:06:51.970502   27997 start.go:851] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 04:06:51.970736   27997 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 04:06:51.970796   27997 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15565-3903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0128 04:06:51.985169   27997 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.28.0
	I0128 04:06:51.986047   27997 cni.go:84] Creating CNI manager for ""
	I0128 04:06:51.986069   27997 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 04:06:51.986086   27997 start_flags.go:319] config:
	{Name:pause-539738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-539738 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.35 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0128 04:06:51.986243   27997 iso.go:125] acquiring lock: {Name:mkae097b889f6bf43a43f260cc80a114303c04bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 04:06:51.988216   27997 out.go:177] * Starting control plane node pause-539738 in cluster pause-539738
	I0128 04:06:51.989588   27997 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 04:06:51.989629   27997 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 04:06:51.989645   27997 cache.go:57] Caching tarball of preloaded images
	I0128 04:06:51.989786   27997 preload.go:174] Found /home/jenkins/minikube-integration/15565-3903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 04:06:51.989802   27997 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 04:06:51.989933   27997 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/config.json ...
	I0128 04:06:51.990106   27997 cache.go:193] Successfully downloaded all kic artifacts
	I0128 04:06:51.990127   27997 start.go:364] acquiring machines lock for pause-539738: {Name:mk7ecd094a2b41dd9dbc24234c685e9f8765e635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0128 04:07:10.051661   27997 start.go:368] acquired machines lock for "pause-539738" in 18.061511682s
	I0128 04:07:10.051707   27997 start.go:96] Skipping create...Using existing machine configuration
	I0128 04:07:10.051714   27997 fix.go:55] fixHost starting: 
	I0128 04:07:10.052110   27997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 04:07:10.052160   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:07:10.072029   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I0128 04:07:10.072478   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:07:10.072937   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:07:10.072959   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:07:10.073344   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:07:10.073528   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:07:10.073722   27997 main.go:141] libmachine: (pause-539738) Calling .GetState
	I0128 04:07:10.075331   27997 fix.go:103] recreateIfNeeded on pause-539738: state=Running err=<nil>
	W0128 04:07:10.075351   27997 fix.go:129] unexpected machine state, will restart: <nil>
	I0128 04:07:10.077787   27997 out.go:177] * Updating the running kvm2 "pause-539738" VM ...
	I0128 04:07:10.079244   27997 machine.go:88] provisioning docker machine ...
	I0128 04:07:10.079265   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:07:10.079454   27997 main.go:141] libmachine: (pause-539738) Calling .GetMachineName
	I0128 04:07:10.079628   27997 buildroot.go:166] provisioning hostname "pause-539738"
	I0128 04:07:10.079652   27997 main.go:141] libmachine: (pause-539738) Calling .GetMachineName
	I0128 04:07:10.079823   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:07:10.082217   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.082681   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:10.082710   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.082893   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:07:10.083047   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:10.083165   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:10.083277   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:07:10.083464   27997 main.go:141] libmachine: Using SSH client type: native
	I0128 04:07:10.083647   27997 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.61.35 22 <nil> <nil>}
	I0128 04:07:10.083667   27997 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-539738 && echo "pause-539738" | sudo tee /etc/hostname
	I0128 04:07:10.235682   27997 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-539738
	
	I0128 04:07:10.235719   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:07:10.238717   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.239114   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:10.239155   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.239366   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:07:10.239581   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:10.239774   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:10.239918   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:07:10.240106   27997 main.go:141] libmachine: Using SSH client type: native
	I0128 04:07:10.240288   27997 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.61.35 22 <nil> <nil>}
	I0128 04:07:10.240316   27997 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-539738' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-539738/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-539738' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 04:07:10.374099   27997 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 04:07:10.374139   27997 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3903/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3903/.minikube}
	I0128 04:07:10.374161   27997 buildroot.go:174] setting up certificates
	I0128 04:07:10.374185   27997 provision.go:83] configureAuth start
	I0128 04:07:10.374202   27997 main.go:141] libmachine: (pause-539738) Calling .GetMachineName
	I0128 04:07:10.374716   27997 main.go:141] libmachine: (pause-539738) Calling .GetIP
	I0128 04:07:10.378505   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.378896   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:10.378927   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.379308   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:07:10.383632   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.384342   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:10.384374   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.384596   27997 provision.go:138] copyHostCerts
	I0128 04:07:10.384642   27997 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3903/.minikube/key.pem, removing ...
	I0128 04:07:10.384648   27997 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3903/.minikube/key.pem
	I0128 04:07:10.384692   27997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3903/.minikube/key.pem (1679 bytes)
	I0128 04:07:10.384772   27997 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3903/.minikube/ca.pem, removing ...
	I0128 04:07:10.384776   27997 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3903/.minikube/ca.pem
	I0128 04:07:10.384797   27997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3903/.minikube/ca.pem (1078 bytes)
	I0128 04:07:10.384846   27997 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3903/.minikube/cert.pem, removing ...
	I0128 04:07:10.384850   27997 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3903/.minikube/cert.pem
	I0128 04:07:10.384865   27997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3903/.minikube/cert.pem (1123 bytes)
	I0128 04:07:10.384948   27997 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca-key.pem org=jenkins.pause-539738 san=[192.168.61.35 192.168.61.35 localhost 127.0.0.1 minikube pause-539738]
	I0128 04:07:10.437439   27997 provision.go:172] copyRemoteCerts
	I0128 04:07:10.437503   27997 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 04:07:10.437530   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:07:10.440905   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.441166   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:10.441220   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.441420   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:07:10.441601   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:10.441750   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:07:10.441905   27997 sshutil.go:53] new ssh client: &{IP:192.168.61.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/pause-539738/id_rsa Username:docker}
	I0128 04:07:10.542832   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0128 04:07:10.566601   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 04:07:10.592169   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0128 04:07:10.615708   27997 provision.go:86] duration metric: configureAuth took 241.505382ms
	I0128 04:07:10.615735   27997 buildroot.go:189] setting minikube options for container-runtime
	I0128 04:07:10.615952   27997 config.go:180] Loaded profile config "pause-539738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 04:07:10.615978   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:07:10.616259   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:07:10.618855   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.619294   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:10.619324   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.619547   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:07:10.619724   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:10.619896   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:10.620061   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:07:10.620230   27997 main.go:141] libmachine: Using SSH client type: native
	I0128 04:07:10.620420   27997 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.61.35 22 <nil> <nil>}
	I0128 04:07:10.620436   27997 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 04:07:10.754589   27997 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0128 04:07:10.754617   27997 buildroot.go:70] root file system type: tmpfs
	I0128 04:07:10.754845   27997 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 04:07:10.754881   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:07:10.757964   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.758431   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:10.758489   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.758712   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:07:10.758955   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:10.759142   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:10.759352   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:07:10.759578   27997 main.go:141] libmachine: Using SSH client type: native
	I0128 04:07:10.759778   27997 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.61.35 22 <nil> <nil>}
	I0128 04:07:10.759876   27997 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 04:07:10.921611   27997 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 04:07:10.921666   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:07:10.925578   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.926092   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:10.926117   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:10.926274   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:07:10.926464   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:10.926651   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:10.926817   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:07:10.926974   27997 main.go:141] libmachine: Using SSH client type: native
	I0128 04:07:10.927250   27997 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.61.35 22 <nil> <nil>}
	I0128 04:07:10.927277   27997 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 04:07:11.085552   27997 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 04:07:11.085578   27997 machine.go:91] provisioned docker machine in 1.006318874s
	I0128 04:07:11.085589   27997 start.go:300] post-start starting for "pause-539738" (driver="kvm2")
	I0128 04:07:11.085597   27997 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 04:07:11.085619   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:07:11.085888   27997 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 04:07:11.085910   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:07:11.088616   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:11.089029   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:11.089065   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:11.089224   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:07:11.089414   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:11.089670   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:07:11.089840   27997 sshutil.go:53] new ssh client: &{IP:192.168.61.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/pause-539738/id_rsa Username:docker}
	I0128 04:07:11.186483   27997 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 04:07:11.190674   27997 info.go:137] Remote host: Buildroot 2021.02.12
	I0128 04:07:11.190693   27997 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3903/.minikube/addons for local assets ...
	I0128 04:07:11.190768   27997 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3903/.minikube/files for local assets ...
	I0128 04:07:11.190868   27997 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3903/.minikube/files/etc/ssl/certs/110622.pem -> 110622.pem in /etc/ssl/certs
	I0128 04:07:11.190964   27997 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 04:07:11.200432   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/files/etc/ssl/certs/110622.pem --> /etc/ssl/certs/110622.pem (1708 bytes)
	I0128 04:07:11.230125   27997 start.go:303] post-start completed in 144.523816ms
	I0128 04:07:11.230151   27997 fix.go:57] fixHost completed within 1.178436565s
	I0128 04:07:11.230174   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:07:11.232803   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:11.233163   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:11.233195   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:11.233356   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:07:11.233529   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:11.233711   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:11.233852   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:07:11.234036   27997 main.go:141] libmachine: Using SSH client type: native
	I0128 04:07:11.234191   27997 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.61.35 22 <nil> <nil>}
	I0128 04:07:11.234207   27997 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0128 04:07:11.365244   27997 main.go:141] libmachine: SSH cmd err, output: <nil>: 1674878831.360445100
	
	I0128 04:07:11.365268   27997 fix.go:207] guest clock: 1674878831.360445100
	I0128 04:07:11.365278   27997 fix.go:220] Guest: 2023-01-28 04:07:11.3604451 +0000 UTC Remote: 2023-01-28 04:07:11.230155088 +0000 UTC m=+19.420292613 (delta=130.290012ms)
	I0128 04:07:11.365317   27997 fix.go:191] guest clock delta is within tolerance: 130.290012ms
	I0128 04:07:11.365324   27997 start.go:83] releasing machines lock for "pause-539738", held for 1.31364094s
	I0128 04:07:11.365355   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:07:11.365709   27997 main.go:141] libmachine: (pause-539738) Calling .GetIP
	I0128 04:07:11.369276   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:11.369631   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:11.369659   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:11.369898   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:07:11.370472   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:07:11.371104   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:07:11.371190   27997 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 04:07:11.371231   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:07:11.371504   27997 ssh_runner.go:195] Run: cat /version.json
	I0128 04:07:11.371526   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:07:11.374155   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:11.374497   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:11.374527   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:11.374679   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:07:11.374751   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:11.374855   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:11.375038   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:07:11.375090   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:11.375111   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:11.375164   27997 sshutil.go:53] new ssh client: &{IP:192.168.61.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/pause-539738/id_rsa Username:docker}
	I0128 04:07:11.375471   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:07:11.375627   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:07:11.375763   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:07:11.375876   27997 sshutil.go:53] new ssh client: &{IP:192.168.61.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/pause-539738/id_rsa Username:docker}
	W0128 04:07:11.496035   27997 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
	I0128 04:07:11.496119   27997 ssh_runner.go:195] Run: systemctl --version
	I0128 04:07:11.502001   27997 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0128 04:07:11.507047   27997 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0128 04:07:11.507144   27997 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 04:07:11.515125   27997 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 04:07:11.532479   27997 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0128 04:07:11.539590   27997 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0128 04:07:11.539607   27997 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 04:07:11.539683   27997 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 04:07:11.566375   27997 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 04:07:11.566400   27997 docker.go:560] Images already preloaded, skipping extraction
	I0128 04:07:11.566408   27997 start.go:472] detecting cgroup driver to use...
	I0128 04:07:11.566548   27997 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 04:07:11.584706   27997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 04:07:11.593495   27997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 04:07:11.601887   27997 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 04:07:11.601933   27997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 04:07:11.610332   27997 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 04:07:11.619154   27997 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 04:07:11.627899   27997 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 04:07:11.636305   27997 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 04:07:11.645003   27997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 04:07:11.654510   27997 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 04:07:11.662747   27997 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 04:07:11.670307   27997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 04:07:11.808558   27997 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 04:07:11.829814   27997 start.go:472] detecting cgroup driver to use...
	I0128 04:07:11.829933   27997 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 04:07:11.843892   27997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0128 04:07:11.856397   27997 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0128 04:07:11.874925   27997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0128 04:07:11.887541   27997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 04:07:11.899369   27997 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 04:07:11.916587   27997 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 04:07:12.085809   27997 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 04:07:12.274789   27997 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 04:07:12.274821   27997 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 04:07:12.296101   27997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 04:07:12.479644   27997 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 04:07:22.084795   27997 ssh_runner.go:235] Completed: sudo systemctl restart docker: (9.60510776s)
	I0128 04:07:22.084864   27997 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 04:07:22.222311   27997 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 04:07:22.342302   27997 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 04:07:22.465783   27997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 04:07:22.584822   27997 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 04:07:22.609281   27997 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 04:07:22.609353   27997 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0128 04:07:22.617915   27997 start.go:540] Will wait 60s for crictl version
	I0128 04:07:22.617982   27997 ssh_runner.go:195] Run: which crictl
	I0128 04:07:22.621783   27997 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 04:07:22.741693   27997 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 04:07:22.741778   27997 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 04:07:22.773223   27997 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 04:07:22.804897   27997 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 04:07:22.804951   27997 main.go:141] libmachine: (pause-539738) Calling .GetIP
	I0128 04:07:22.807812   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:22.808250   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:07:22.808278   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:07:22.808575   27997 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0128 04:07:22.812730   27997 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 04:07:22.812814   27997 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 04:07:22.841042   27997 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 04:07:22.841067   27997 docker.go:560] Images already preloaded, skipping extraction
	I0128 04:07:22.841127   27997 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 04:07:22.871553   27997 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 04:07:22.871575   27997 cache_images.go:84] Images are preloaded, skipping loading
	I0128 04:07:22.871644   27997 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 04:07:22.907171   27997 cni.go:84] Creating CNI manager for ""
	I0128 04:07:22.907204   27997 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 04:07:22.907217   27997 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 04:07:22.907237   27997 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.35 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-539738 NodeName:pause-539738 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 04:07:22.907465   27997 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-539738"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.35
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.35"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 04:07:22.907557   27997 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-539738 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:pause-539738 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 04:07:22.907603   27997 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 04:07:22.918642   27997 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 04:07:22.918703   27997 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 04:07:22.928500   27997 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (445 bytes)
	I0128 04:07:22.944942   27997 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 04:07:22.959687   27997 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2091 bytes)
	I0128 04:07:22.973809   27997 ssh_runner.go:195] Run: grep 192.168.61.35	control-plane.minikube.internal$ /etc/hosts
	I0128 04:07:22.977071   27997 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738 for IP: 192.168.61.35
	I0128 04:07:22.977101   27997 certs.go:186] acquiring lock for shared ca certs: {Name:mkfc8928307a3e2907546b08aba44a06c8ae27b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 04:07:22.977256   27997 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3903/.minikube/ca.key
	I0128 04:07:22.977304   27997 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3903/.minikube/proxy-client-ca.key
	I0128 04:07:22.977396   27997 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.key
	I0128 04:07:22.977452   27997 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/apiserver.key.78a35614
	I0128 04:07:22.977497   27997 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/proxy-client.key
	I0128 04:07:22.977647   27997 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/home/jenkins/minikube-integration/15565-3903/.minikube/certs/11062.pem (1338 bytes)
	W0128 04:07:22.977695   27997 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3903/.minikube/certs/home/jenkins/minikube-integration/15565-3903/.minikube/certs/11062_empty.pem, impossibly tiny 0 bytes
	I0128 04:07:22.977710   27997 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca-key.pem (1679 bytes)
	I0128 04:07:22.977745   27997 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem (1078 bytes)
	I0128 04:07:22.977777   27997 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/home/jenkins/minikube-integration/15565-3903/.minikube/certs/cert.pem (1123 bytes)
	I0128 04:07:22.977812   27997 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/home/jenkins/minikube-integration/15565-3903/.minikube/certs/key.pem (1679 bytes)
	I0128 04:07:22.977863   27997 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3903/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3903/.minikube/files/etc/ssl/certs/110622.pem (1708 bytes)
	I0128 04:07:22.978518   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 04:07:23.002077   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0128 04:07:23.023192   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 04:07:23.042947   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0128 04:07:23.064664   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 04:07:23.086738   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0128 04:07:23.107915   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 04:07:23.128757   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0128 04:07:23.155496   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/files/etc/ssl/certs/110622.pem --> /usr/share/ca-certificates/110622.pem (1708 bytes)
	I0128 04:07:23.189965   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 04:07:23.218608   27997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/certs/11062.pem --> /usr/share/ca-certificates/11062.pem (1338 bytes)
	I0128 04:07:23.243642   27997 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0128 04:07:23.259525   27997 ssh_runner.go:195] Run: openssl version
	I0128 04:07:23.265807   27997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110622.pem && ln -fs /usr/share/ca-certificates/110622.pem /etc/ssl/certs/110622.pem"
	I0128 04:07:23.276556   27997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110622.pem
	I0128 04:07:23.281400   27997 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 03:36 /usr/share/ca-certificates/110622.pem
	I0128 04:07:23.281442   27997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110622.pem
	I0128 04:07:23.287538   27997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110622.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 04:07:23.296577   27997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 04:07:23.305500   27997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 04:07:23.309495   27997 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 03:31 /usr/share/ca-certificates/minikubeCA.pem
	I0128 04:07:23.309541   27997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 04:07:23.315232   27997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 04:07:23.325140   27997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11062.pem && ln -fs /usr/share/ca-certificates/11062.pem /etc/ssl/certs/11062.pem"
	I0128 04:07:23.335667   27997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11062.pem
	I0128 04:07:23.339835   27997 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 03:36 /usr/share/ca-certificates/11062.pem
	I0128 04:07:23.339876   27997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11062.pem
	I0128 04:07:23.345808   27997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11062.pem /etc/ssl/certs/51391683.0"
	I0128 04:07:23.355626   27997 kubeadm.go:401] StartCluster: {Name:pause-539738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-539738 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.35 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0128 04:07:23.355746   27997 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 04:07:23.381174   27997 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 04:07:23.388660   27997 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0128 04:07:23.388678   27997 kubeadm.go:633] restartCluster start
	I0128 04:07:23.388732   27997 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0128 04:07:23.396510   27997 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0128 04:07:23.397188   27997 kubeconfig.go:92] found "pause-539738" server: "https://192.168.61.35:8443"
	I0128 04:07:23.398039   27997 kapi.go:59] client config for pause-539738: &rest.Config{Host:"https://192.168.61.35:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 04:07:23.398675   27997 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0128 04:07:23.405660   27997 api_server.go:165] Checking apiserver status ...
	I0128 04:07:23.405695   27997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 04:07:23.415594   27997 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 04:07:23.916273   27997 api_server.go:165] Checking apiserver status ...
	I0128 04:07:23.916347   27997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 04:07:23.934891   27997 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 04:07:24.416268   27997 api_server.go:165] Checking apiserver status ...
	I0128 04:07:24.416342   27997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 04:07:24.431806   27997 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 04:07:24.916437   27997 api_server.go:165] Checking apiserver status ...
	I0128 04:07:24.916518   27997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 04:07:24.951560   27997 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 04:07:25.416560   27997 api_server.go:165] Checking apiserver status ...
	I0128 04:07:25.416685   27997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 04:07:25.450917   27997 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 04:07:25.916559   27997 api_server.go:165] Checking apiserver status ...
	I0128 04:07:25.916644   27997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 04:07:25.952801   27997 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6377/cgroup
	I0128 04:07:25.976604   27997 api_server.go:181] apiserver freezer: "5:freezer:/kubepods/burstable/pod4c005079034d4260419174f910cde40c/7a3e62c8e65a3059900ba9379b58976c9d6969c725f4ea58a30262fb8d1acbb1"
	I0128 04:07:25.976682   27997 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4c005079034d4260419174f910cde40c/7a3e62c8e65a3059900ba9379b58976c9d6969c725f4ea58a30262fb8d1acbb1/freezer.state
	I0128 04:07:26.007889   27997 api_server.go:203] freezer state: "THAWED"
	I0128 04:07:26.007922   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:31.009105   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0128 04:07:31.009183   27997 retry.go:31] will retry after 263.082536ms: state is "Stopped"
	I0128 04:07:31.272341   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:36.273461   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0128 04:07:36.273520   27997 retry.go:31] will retry after 381.329545ms: state is "Stopped"
	I0128 04:07:36.655044   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:41.655549   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0128 04:07:41.655599   27997 api_server.go:165] Checking apiserver status ...
	I0128 04:07:41.655647   27997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 04:07:41.668096   27997 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6377/cgroup
	I0128 04:07:41.677333   27997 api_server.go:181] apiserver freezer: "5:freezer:/kubepods/burstable/pod4c005079034d4260419174f910cde40c/7a3e62c8e65a3059900ba9379b58976c9d6969c725f4ea58a30262fb8d1acbb1"
	I0128 04:07:41.677405   27997 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4c005079034d4260419174f910cde40c/7a3e62c8e65a3059900ba9379b58976c9d6969c725f4ea58a30262fb8d1acbb1/freezer.state
	I0128 04:07:41.686705   27997 api_server.go:203] freezer state: "THAWED"
	I0128 04:07:41.686725   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:46.544675   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": read tcp 192.168.61.1:47184->192.168.61.35:8443: read: connection reset by peer
	I0128 04:07:46.544732   27997 retry.go:31] will retry after 242.214273ms: state is "Stopped"
	I0128 04:07:46.788000   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:46.788534   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:46.788569   27997 retry.go:31] will retry after 300.724609ms: state is "Stopped"
	I0128 04:07:47.089841   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:47.090469   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:47.090504   27997 retry.go:31] will retry after 427.113882ms: state is "Stopped"
	I0128 04:07:47.518052   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:47.518614   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:47.518656   27997 retry.go:31] will retry after 382.2356ms: state is "Stopped"
	I0128 04:07:47.901087   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:47.901612   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:47.901653   27997 retry.go:31] will retry after 505.529557ms: state is "Stopped"
	I0128 04:07:48.407292   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:48.407893   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:48.407935   27997 retry.go:31] will retry after 609.195524ms: state is "Stopped"
	I0128 04:07:49.017283   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:49.017963   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:49.018005   27997 retry.go:31] will retry after 858.741692ms: state is "Stopped"
	I0128 04:07:49.876955   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:49.877615   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:49.877654   27997 retry.go:31] will retry after 1.201160326s: state is "Stopped"
	I0128 04:07:51.079341   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:51.079978   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:51.080012   27997 retry.go:31] will retry after 1.723796097s: state is "Stopped"
	I0128 04:07:52.804593   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:52.805228   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:52.805274   27997 retry.go:31] will retry after 1.596532639s: state is "Stopped"
	I0128 04:07:54.401903   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:54.402514   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:54.402555   27997 retry.go:31] will retry after 2.189373114s: state is "Stopped"
	I0128 04:07:56.592272   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:07:56.593053   27997 api_server.go:268] stopped: https://192.168.61.35:8443/healthz: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:56.593109   27997 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0128 04:07:56.593127   27997 kubeadm.go:1120] stopping kube-system containers ...
	I0128 04:07:56.593189   27997 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 04:07:56.632861   27997 docker.go:456] Stopping containers: [f4d02970c201 689f2394c859 ecd079acd243 a247c449d214 7a3e62c8e65a be6ac4b35350 29628336e08f 4cbfacd312e5 b1adfd0dc97e 3c2eca72a2a1 ae418143b0c2 f537afa7d5fe 1b2ee16eff68 2bfa37963896 25643353b1f1 1e035943f54e 3a9fca0efa20 4e7d7d2ab2ef aa5be7efa147 e893ce8e3664 b1dc70f05bf1]
	I0128 04:07:56.632939   27997 ssh_runner.go:195] Run: docker stop f4d02970c201 689f2394c859 ecd079acd243 a247c449d214 7a3e62c8e65a be6ac4b35350 29628336e08f 4cbfacd312e5 b1adfd0dc97e 3c2eca72a2a1 ae418143b0c2 f537afa7d5fe 1b2ee16eff68 2bfa37963896 25643353b1f1 1e035943f54e 3a9fca0efa20 4e7d7d2ab2ef aa5be7efa147 e893ce8e3664 b1dc70f05bf1
	I0128 04:08:02.209857   27997 ssh_runner.go:235] Completed: docker stop f4d02970c201 689f2394c859 ecd079acd243 a247c449d214 7a3e62c8e65a be6ac4b35350 29628336e08f 4cbfacd312e5 b1adfd0dc97e 3c2eca72a2a1 ae418143b0c2 f537afa7d5fe 1b2ee16eff68 2bfa37963896 25643353b1f1 1e035943f54e 3a9fca0efa20 4e7d7d2ab2ef aa5be7efa147 e893ce8e3664 b1dc70f05bf1: (5.576878582s)
	I0128 04:08:02.209933   27997 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0128 04:08:02.247471   27997 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 04:08:02.257670   27997 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 28 04:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jan 28 04:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jan 28 04:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jan 28 04:05 /etc/kubernetes/scheduler.conf
	
	I0128 04:08:02.257745   27997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0128 04:08:02.266543   27997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0128 04:08:02.274976   27997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0128 04:08:02.283501   27997 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 04:08:02.283552   27997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0128 04:08:02.292760   27997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0128 04:08:02.302621   27997 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 04:08:02.302683   27997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0128 04:08:02.312080   27997 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 04:08:02.321079   27997 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0128 04:08:02.321105   27997 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 04:08:02.395807   27997 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 04:08:03.408018   27997 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.012177089s)
	I0128 04:08:03.408056   27997 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0128 04:08:03.628843   27997 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 04:08:03.697191   27997 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0128 04:08:03.770388   27997 api_server.go:51] waiting for apiserver process to appear ...
	I0128 04:08:03.770460   27997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 04:08:03.789208   27997 api_server.go:71] duration metric: took 18.825457ms to wait for apiserver process to appear ...
	I0128 04:08:03.789238   27997 api_server.go:87] waiting for apiserver healthz status ...
	I0128 04:08:03.789251   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:08:08.776672   27997 api_server.go:278] https://192.168.61.35:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0128 04:08:08.776706   27997 api_server.go:102] status: https://192.168.61.35:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0128 04:08:09.277406   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:08:09.285927   27997 api_server.go:278] https://192.168.61.35:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 04:08:09.285958   27997 api_server.go:102] status: https://192.168.61.35:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 04:08:09.777629   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:08:09.783599   27997 api_server.go:278] https://192.168.61.35:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 04:08:09.783626   27997 api_server.go:102] status: https://192.168.61.35:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 04:08:10.277359   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:08:10.281803   27997 api_server.go:278] https://192.168.61.35:8443/healthz returned 200:
	ok
	I0128 04:08:10.288366   27997 api_server.go:140] control plane version: v1.26.1
	I0128 04:08:10.288387   27997 api_server.go:130] duration metric: took 6.499142294s to wait for apiserver health ...
	I0128 04:08:10.288397   27997 cni.go:84] Creating CNI manager for ""
	I0128 04:08:10.288412   27997 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 04:08:10.374186   27997 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0128 04:08:10.517727   27997 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0128 04:08:10.527990   27997 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0128 04:08:10.546148   27997 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 04:08:10.569616   27997 system_pods.go:59] 6 kube-system pods found
	I0128 04:08:10.569649   27997 system_pods.go:61] "coredns-787d4945fb-jvdr8" [9d5d58d3-36c6-44d2-bf2d-2297c435af12] Running
	I0128 04:08:10.569662   27997 system_pods.go:61] "etcd-pause-539738" [4e925a1f-e8e7-463f-9ca5-30f3bcf9e034] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0128 04:08:10.569672   27997 system_pods.go:61] "kube-apiserver-pause-539738" [b89c18b3-bea5-480d-8059-6f1909701f9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0128 04:08:10.569679   27997 system_pods.go:61] "kube-controller-manager-pause-539738" [6a7def17-49f7-49d3-9bc6-94c176e59887] Running
	I0128 04:08:10.569686   27997 system_pods.go:61] "kube-proxy-2vxmw" [f0971d3d-f13f-421d-a7db-fa18ee862abb] Running
	I0128 04:08:10.569694   27997 system_pods.go:61] "kube-scheduler-pause-539738" [bf3dd75f-9d11-4088-8afc-6e0200586918] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0128 04:08:10.569719   27997 system_pods.go:74] duration metric: took 23.548103ms to wait for pod list to return data ...
	I0128 04:08:10.569728   27997 node_conditions.go:102] verifying NodePressure condition ...
	I0128 04:08:10.598228   27997 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0128 04:08:10.598258   27997 node_conditions.go:123] node cpu capacity is 2
	I0128 04:08:10.598281   27997 node_conditions.go:105] duration metric: took 28.544189ms to run NodePressure ...
	I0128 04:08:10.598300   27997 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 04:08:10.890382   27997 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0128 04:08:10.897072   27997 kubeadm.go:784] kubelet initialised
	I0128 04:08:10.897098   27997 kubeadm.go:785] duration metric: took 6.685746ms waiting for restarted kubelet to initialise ...
	I0128 04:08:10.897108   27997 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 04:08:10.901587   27997 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-jvdr8" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:10.907007   27997 pod_ready.go:92] pod "coredns-787d4945fb-jvdr8" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:10.907022   27997 pod_ready.go:81] duration metric: took 5.407394ms waiting for pod "coredns-787d4945fb-jvdr8" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:10.907029   27997 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:12.917951   27997 pod_ready.go:102] pod "etcd-pause-539738" in "kube-system" namespace has status "Ready":"False"
	I0128 04:08:14.918143   27997 pod_ready.go:102] pod "etcd-pause-539738" in "kube-system" namespace has status "Ready":"False"
	I0128 04:08:16.920223   27997 pod_ready.go:102] pod "etcd-pause-539738" in "kube-system" namespace has status "Ready":"False"
	I0128 04:08:17.418837   27997 pod_ready.go:92] pod "etcd-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:17.418866   27997 pod_ready.go:81] duration metric: took 6.511830933s waiting for pod "etcd-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:17.418877   27997 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:17.423714   27997 pod_ready.go:92] pod "kube-apiserver-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:17.423733   27997 pod_ready.go:81] duration metric: took 4.846452ms waiting for pod "kube-apiserver-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:17.423741   27997 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.437255   27997 pod_ready.go:102] pod "kube-controller-manager-pause-539738" in "kube-system" namespace has status "Ready":"False"
	I0128 04:08:19.936646   27997 pod_ready.go:92] pod "kube-controller-manager-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:19.936678   27997 pod_ready.go:81] duration metric: took 2.512929249s waiting for pod "kube-controller-manager-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.936691   27997 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2vxmw" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.944738   27997 pod_ready.go:92] pod "kube-proxy-2vxmw" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:19.944761   27997 pod_ready.go:81] duration metric: took 8.062252ms waiting for pod "kube-proxy-2vxmw" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.944774   27997 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.951114   27997 pod_ready.go:92] pod "kube-scheduler-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:19.951132   27997 pod_ready.go:81] duration metric: took 6.350074ms waiting for pod "kube-scheduler-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.951141   27997 pod_ready.go:38] duration metric: took 9.054023106s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 04:08:19.951158   27997 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0128 04:08:19.965058   27997 ops.go:34] apiserver oom_adj: -16
	I0128 04:08:19.965079   27997 kubeadm.go:637] restartCluster took 56.576394153s
	I0128 04:08:19.965086   27997 kubeadm.go:403] StartCluster complete in 56.609465724s
	I0128 04:08:19.965103   27997 settings.go:142] acquiring lock: {Name:mkba6eafa5830ee298eee339d43ce981c09fcd93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 04:08:19.965179   27997 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3903/kubeconfig
	I0128 04:08:19.966017   27997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3903/kubeconfig: {Name:mk6d09a9ae49503096fa4914dc61ac689beebb9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 04:08:19.966241   27997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0128 04:08:19.966328   27997 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0128 04:08:19.966408   27997 addons.go:65] Setting storage-provisioner=true in profile "pause-539738"
	I0128 04:08:19.966411   27997 config.go:180] Loaded profile config "pause-539738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 04:08:19.966414   27997 addons.go:65] Setting default-storageclass=true in profile "pause-539738"
	I0128 04:08:19.966426   27997 addons.go:227] Setting addon storage-provisioner=true in "pause-539738"
	W0128 04:08:19.966434   27997 addons.go:236] addon storage-provisioner should already be in state true
	I0128 04:08:19.966439   27997 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-539738"
	I0128 04:08:19.966502   27997 host.go:66] Checking if "pause-539738" exists ...
	I0128 04:08:19.966819   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:19.966855   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:19.966856   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:19.966902   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:19.967133   27997 kapi.go:59] client config for pause-539738: &rest.Config{Host:"https://192.168.61.35:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 04:08:19.970013   27997 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-539738" context rescaled to 1 replicas
	I0128 04:08:19.970046   27997 start.go:221] Will wait 6m0s for node &{Name: IP:192.168.61.35 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 04:08:19.972097   27997 out.go:177] * Verifying Kubernetes components...
	I0128 04:08:19.973640   27997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 04:08:19.982426   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33553
	I0128 04:08:19.982827   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:19.983285   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:19.983307   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:19.983640   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:19.983915   27997 main.go:141] libmachine: (pause-539738) Calling .GetState
	I0128 04:08:19.986373   27997 kapi.go:59] client config for pause-539738: &rest.Config{Host:"https://192.168.61.35:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 04:08:19.988123   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40279
	I0128 04:08:19.988506   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:19.989003   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:19.989020   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:19.989470   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:19.989813   27997 addons.go:227] Setting addon default-storageclass=true in "pause-539738"
	W0128 04:08:19.989824   27997 addons.go:236] addon default-storageclass should already be in state true
	I0128 04:08:19.989845   27997 host.go:66] Checking if "pause-539738" exists ...
	I0128 04:08:19.990068   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:19.990081   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:19.990504   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:19.990529   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:20.010224   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0128 04:08:20.010819   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:20.011457   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:20.011479   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:20.014118   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34355
	I0128 04:08:20.014312   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:20.014460   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:20.014899   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:20.014918   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:20.015006   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:20.015042   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:20.015420   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:20.015683   27997 main.go:141] libmachine: (pause-539738) Calling .GetState
	I0128 04:08:20.017689   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:08:20.020135   27997 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0128 04:08:20.021649   27997 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 04:08:20.021666   27997 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0128 04:08:20.021683   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:08:20.024866   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:08:20.025519   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:08:20.025545   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:08:20.025726   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:08:20.025894   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:08:20.026077   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:08:20.026226   27997 sshutil.go:53] new ssh client: &{IP:192.168.61.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/pause-539738/id_rsa Username:docker}
	I0128 04:08:20.033882   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0128 04:08:20.034220   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:20.034646   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:20.034662   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:20.034945   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:20.035142   27997 main.go:141] libmachine: (pause-539738) Calling .GetState
	I0128 04:08:20.036874   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:08:20.037194   27997 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0128 04:08:20.037218   27997 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0128 04:08:20.037237   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:08:20.040521   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:08:20.041037   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:08:20.041057   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:08:20.041208   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:08:20.041364   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:08:20.041504   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:08:20.041604   27997 sshutil.go:53] new ssh client: &{IP:192.168.61.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/pause-539738/id_rsa Username:docker}
	I0128 04:08:20.139989   27997 start.go:881] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0128 04:08:20.140046   27997 node_ready.go:35] waiting up to 6m0s for node "pause-539738" to be "Ready" ...
	I0128 04:08:20.143009   27997 node_ready.go:49] node "pause-539738" has status "Ready":"True"
	I0128 04:08:20.143027   27997 node_ready.go:38] duration metric: took 2.970545ms waiting for node "pause-539738" to be "Ready" ...
	I0128 04:08:20.143034   27997 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 04:08:20.148143   27997 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-jvdr8" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:20.171291   27997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 04:08:20.194995   27997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0128 04:08:20.216817   27997 pod_ready.go:92] pod "coredns-787d4945fb-jvdr8" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:20.216845   27997 pod_ready.go:81] duration metric: took 68.682415ms waiting for pod "coredns-787d4945fb-jvdr8" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:20.216857   27997 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:20.616097   27997 pod_ready.go:92] pod "etcd-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:20.616171   27997 pod_ready.go:81] duration metric: took 399.304996ms waiting for pod "etcd-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:20.616192   27997 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:21.029453   27997 pod_ready.go:92] pod "kube-apiserver-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:21.029481   27997 pod_ready.go:81] duration metric: took 413.271931ms waiting for pod "kube-apiserver-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:21.029497   27997 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:21.585474   27997 pod_ready.go:92] pod "kube-controller-manager-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:21.585504   27997 pod_ready.go:81] duration metric: took 555.998841ms waiting for pod "kube-controller-manager-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:21.585519   27997 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2vxmw" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:22.194411   27997 pod_ready.go:92] pod "kube-proxy-2vxmw" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:22.194435   27997 pod_ready.go:81] duration metric: took 608.908313ms waiting for pod "kube-proxy-2vxmw" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:22.194447   27997 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:22.270839   27997 pod_ready.go:92] pod "kube-scheduler-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:22.270869   27997 pod_ready.go:81] duration metric: took 76.409295ms waiting for pod "kube-scheduler-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:22.270881   27997 pod_ready.go:38] duration metric: took 2.127838794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 04:08:22.270907   27997 api_server.go:51] waiting for apiserver process to appear ...
	I0128 04:08:22.270958   27997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 04:08:22.523676   27997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.352347256s)
	I0128 04:08:22.523720   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.523733   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.523819   27997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.328799927s)
	I0128 04:08:22.523832   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.523840   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.523883   27997 api_server.go:71] duration metric: took 2.553813709s to wait for apiserver process to appear ...
	I0128 04:08:22.523890   27997 api_server.go:87] waiting for apiserver healthz status ...
	I0128 04:08:22.523901   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:08:22.527469   27997 main.go:141] libmachine: (pause-539738) DBG | Closing plugin on server side
	I0128 04:08:22.527525   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.527543   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.527552   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.527566   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.527575   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.527590   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.527579   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.527652   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.527673   27997 main.go:141] libmachine: (pause-539738) DBG | Closing plugin on server side
	I0128 04:08:22.527862   27997 main.go:141] libmachine: (pause-539738) DBG | Closing plugin on server side
	I0128 04:08:22.527907   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.527918   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.527942   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.527955   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.527971   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.527981   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.528224   27997 main.go:141] libmachine: (pause-539738) DBG | Closing plugin on server side
	I0128 04:08:22.528266   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.528281   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.530083   27997 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0128 04:08:22.531478   27997 addons.go:488] enableAddons completed in 2.565153495s
	I0128 04:08:22.536235   27997 api_server.go:278] https://192.168.61.35:8443/healthz returned 200:
	ok
	I0128 04:08:22.546519   27997 api_server.go:140] control plane version: v1.26.1
	I0128 04:08:22.546542   27997 api_server.go:130] duration metric: took 22.645385ms to wait for apiserver health ...
	I0128 04:08:22.546567   27997 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 04:08:22.566601   27997 system_pods.go:59] 7 kube-system pods found
	I0128 04:08:22.566636   27997 system_pods.go:61] "coredns-787d4945fb-jvdr8" [9d5d58d3-36c6-44d2-bf2d-2297c435af12] Running
	I0128 04:08:22.566645   27997 system_pods.go:61] "etcd-pause-539738" [4e925a1f-e8e7-463f-9ca5-30f3bcf9e034] Running
	I0128 04:08:22.566652   27997 system_pods.go:61] "kube-apiserver-pause-539738" [b89c18b3-bea5-480d-8059-6f1909701f9b] Running
	I0128 04:08:22.566665   27997 system_pods.go:61] "kube-controller-manager-pause-539738" [6a7def17-49f7-49d3-9bc6-94c176e59887] Running
	I0128 04:08:22.566743   27997 system_pods.go:61] "kube-proxy-2vxmw" [f0971d3d-f13f-421d-a7db-fa18ee862abb] Running
	I0128 04:08:22.566750   27997 system_pods.go:61] "kube-scheduler-pause-539738" [bf3dd75f-9d11-4088-8afc-6e0200586918] Running
	I0128 04:08:22.566757   27997 system_pods.go:61] "storage-provisioner" [28af396f-4ec7-455c-afe3-469c018c0197] Pending
	I0128 04:08:22.566764   27997 system_pods.go:74] duration metric: took 20.191146ms to wait for pod list to return data ...
	I0128 04:08:22.566780   27997 default_sa.go:34] waiting for default service account to be created ...
	I0128 04:08:22.620765   27997 default_sa.go:45] found service account: "default"
	I0128 04:08:22.620791   27997 default_sa.go:55] duration metric: took 54.004254ms for default service account to be created ...
	I0128 04:08:22.620801   27997 system_pods.go:116] waiting for k8s-apps to be running ...
	I0128 04:08:22.820897   27997 system_pods.go:86] 7 kube-system pods found
	I0128 04:08:22.820980   27997 system_pods.go:89] "coredns-787d4945fb-jvdr8" [9d5d58d3-36c6-44d2-bf2d-2297c435af12] Running
	I0128 04:08:22.820994   27997 system_pods.go:89] "etcd-pause-539738" [4e925a1f-e8e7-463f-9ca5-30f3bcf9e034] Running
	I0128 04:08:22.821001   27997 system_pods.go:89] "kube-apiserver-pause-539738" [b89c18b3-bea5-480d-8059-6f1909701f9b] Running
	I0128 04:08:22.821009   27997 system_pods.go:89] "kube-controller-manager-pause-539738" [6a7def17-49f7-49d3-9bc6-94c176e59887] Running
	I0128 04:08:22.821026   27997 system_pods.go:89] "kube-proxy-2vxmw" [f0971d3d-f13f-421d-a7db-fa18ee862abb] Running
	I0128 04:08:22.821033   27997 system_pods.go:89] "kube-scheduler-pause-539738" [bf3dd75f-9d11-4088-8afc-6e0200586918] Running
	I0128 04:08:22.821048   27997 system_pods.go:89] "storage-provisioner" [28af396f-4ec7-455c-afe3-469c018c0197] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0128 04:08:22.821061   27997 system_pods.go:126] duration metric: took 200.254117ms to wait for k8s-apps to be running ...
	I0128 04:08:22.821072   27997 system_svc.go:44] waiting for kubelet service to be running ....
	I0128 04:08:22.821120   27997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 04:08:22.836608   27997 system_svc.go:56] duration metric: took 15.525635ms WaitForService to wait for kubelet.
	I0128 04:08:22.836632   27997 kubeadm.go:578] duration metric: took 2.866561868s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0128 04:08:22.836651   27997 node_conditions.go:102] verifying NodePressure condition ...
	I0128 04:08:23.017898   27997 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0128 04:08:23.017942   27997 node_conditions.go:123] node cpu capacity is 2
	I0128 04:08:23.017956   27997 node_conditions.go:105] duration metric: took 181.298919ms to run NodePressure ...
	I0128 04:08:23.017971   27997 start.go:226] waiting for startup goroutines ...
	I0128 04:08:23.018318   27997 ssh_runner.go:195] Run: rm -f paused
	I0128 04:08:23.106272   27997 start.go:538] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0128 04:08:23.108522   27997 out.go:177] * Done! kubectl is now configured to use "pause-539738" cluster and "default" namespace by default

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-539738 -n pause-539738
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-539738 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-539738 logs -n 25: (2.035888817s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-877541 sudo find     | cilium-877541             | jenkins | v1.28.0 | 28 Jan 23 04:04 UTC |                     |
	|         | /etc/crio -type f -exec sh -c  |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                           |         |         |                     |                     |
	| ssh     | -p cilium-877541 sudo crio     | cilium-877541             | jenkins | v1.28.0 | 28 Jan 23 04:04 UTC |                     |
	|         | config                         |                           |         |         |                     |                     |
	| delete  | -p cilium-877541               | cilium-877541             | jenkins | v1.28.0 | 28 Jan 23 04:04 UTC | 28 Jan 23 04:04 UTC |
	| start   | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:04 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:04 UTC | 28 Jan 23 04:06 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| delete  | -p offline-docker-466600       | offline-docker-466600     | jenkins | v1.28.0 | 28 Jan 23 04:05 UTC | 28 Jan 23 04:05 UTC |
	| start   | -p kubernetes-upgrade-994986   | kubernetes-upgrade-994986 | jenkins | v1.28.0 | 28 Jan 23 04:05 UTC | 28 Jan 23 04:06 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| start   | -p running-upgrade-482422      | running-upgrade-482422    | jenkins | v1.28.0 | 28 Jan 23 04:05 UTC | 28 Jan 23 04:07 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:06 UTC | 28 Jan 23 04:06 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:06 UTC | 28 Jan 23 04:06 UTC |
	| start   | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:06 UTC | 28 Jan 23 04:07 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	| start   | -p pause-539738                | pause-539738              | jenkins | v1.28.0 | 28 Jan 23 04:06 UTC | 28 Jan 23 04:08 UTC |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-994986   | kubernetes-upgrade-994986 | jenkins | v1.28.0 | 28 Jan 23 04:06 UTC | 28 Jan 23 04:07 UTC |
	| start   | -p kubernetes-upgrade-994986   | kubernetes-upgrade-994986 | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:08 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-482422      | running-upgrade-482422    | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:07 UTC |
	| ssh     | -p NoKubernetes-398207 sudo    | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| profile | list                           | minikube                  | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:07 UTC |
	| profile | list --output=json             | minikube                  | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:07 UTC |
	| stop    | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:07 UTC |
	| start   | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:07 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-398207 sudo    | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:08 UTC |
	| start   | -p kubernetes-upgrade-994986   | kubernetes-upgrade-994986 | jenkins | v1.28.0 | 28 Jan 23 04:08 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-994986   | kubernetes-upgrade-994986 | jenkins | v1.28.0 | 28 Jan 23 04:08 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-746602   | force-systemd-flag-746602 | jenkins | v1.28.0 | 28 Jan 23 04:08 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 04:08:18
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 04:08:18.049837   29125 out.go:296] Setting OutFile to fd 1 ...
	I0128 04:08:18.050033   29125 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 04:08:18.050043   29125 out.go:309] Setting ErrFile to fd 2...
	I0128 04:08:18.050047   29125 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 04:08:18.050159   29125 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3903/.minikube/bin
	I0128 04:08:18.050701   29125 out.go:303] Setting JSON to false
	I0128 04:08:18.051771   29125 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3049,"bootTime":1674875849,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0128 04:08:18.051837   29125 start.go:135] virtualization: kvm guest
	I0128 04:08:18.054429   29125 out.go:177] * [force-systemd-flag-746602] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0128 04:08:18.056140   29125 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 04:08:18.056048   29125 notify.go:220] Checking for updates...
	I0128 04:08:18.059470   29125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 04:08:18.061698   29125 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	I0128 04:08:18.063105   29125 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	I0128 04:08:18.068538   29125 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0128 04:08:18.070009   29125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 04:08:18.071858   29125 config.go:180] Loaded profile config "kubernetes-upgrade-994986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 04:08:18.071973   29125 config.go:180] Loaded profile config "pause-539738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 04:08:18.072042   29125 config.go:180] Loaded profile config "stopped-upgrade-426786": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0128 04:08:18.072088   29125 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 04:08:18.107312   29125 out.go:177] * Using the kvm2 driver based on user configuration
	I0128 04:08:18.108810   29125 start.go:296] selected driver: kvm2
	I0128 04:08:18.108826   29125 start.go:840] validating driver "kvm2" against <nil>
	I0128 04:08:18.108834   29125 start.go:851] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 04:08:18.109427   29125 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 04:08:18.109518   29125 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15565-3903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0128 04:08:18.127081   29125 install.go:137] /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2 version is 1.28.0
	I0128 04:08:18.127147   29125 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 04:08:18.127359   29125 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0128 04:08:18.127411   29125 cni.go:84] Creating CNI manager for ""
	I0128 04:08:18.127434   29125 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 04:08:18.127446   29125 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0128 04:08:18.127458   29125 start_flags.go:319] config:
	{Name:force-systemd-flag-746602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:force-systemd-flag-746602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0128 04:08:18.127576   29125 iso.go:125] acquiring lock: {Name:mkae097b889f6bf43a43f260cc80a114303c04bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 04:08:18.129789   29125 out.go:177] * Starting control plane node force-systemd-flag-746602 in cluster force-systemd-flag-746602
	I0128 04:08:18.131471   29125 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 04:08:18.131514   29125 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 04:08:18.131531   29125 cache.go:57] Caching tarball of preloaded images
	I0128 04:08:18.131612   29125 preload.go:174] Found /home/jenkins/minikube-integration/15565-3903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 04:08:18.131624   29125 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 04:08:18.131766   29125 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/force-systemd-flag-746602/config.json ...
	I0128 04:08:18.131798   29125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/force-systemd-flag-746602/config.json: {Name:mk5335befc04e0920c98065c21e75c80618fae12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 04:08:18.131942   29125 cache.go:193] Successfully downloaded all kic artifacts
	I0128 04:08:18.131979   29125 start.go:364] acquiring machines lock for force-systemd-flag-746602: {Name:mk7ecd094a2b41dd9dbc24234c685e9f8765e635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0128 04:08:18.132022   29125 start.go:368] acquired machines lock for "force-systemd-flag-746602" in 26.604µs
	I0128 04:08:18.132052   29125 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-746602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:force-systemd-flag-746602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 04:08:18.132140   29125 start.go:125] createHost starting for "" (driver="kvm2")
	I0128 04:08:16.920223   27997 pod_ready.go:102] pod "etcd-pause-539738" in "kube-system" namespace has status "Ready":"False"
	I0128 04:08:17.418837   27997 pod_ready.go:92] pod "etcd-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:17.418866   27997 pod_ready.go:81] duration metric: took 6.511830933s waiting for pod "etcd-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:17.418877   27997 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:17.423714   27997 pod_ready.go:92] pod "kube-apiserver-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:17.423733   27997 pod_ready.go:81] duration metric: took 4.846452ms waiting for pod "kube-apiserver-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:17.423741   27997 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.437255   27997 pod_ready.go:102] pod "kube-controller-manager-pause-539738" in "kube-system" namespace has status "Ready":"False"
	I0128 04:08:19.936646   27997 pod_ready.go:92] pod "kube-controller-manager-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:19.936678   27997 pod_ready.go:81] duration metric: took 2.512929249s waiting for pod "kube-controller-manager-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.936691   27997 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2vxmw" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.944738   27997 pod_ready.go:92] pod "kube-proxy-2vxmw" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:19.944761   27997 pod_ready.go:81] duration metric: took 8.062252ms waiting for pod "kube-proxy-2vxmw" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.944774   27997 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.951114   27997 pod_ready.go:92] pod "kube-scheduler-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:19.951132   27997 pod_ready.go:81] duration metric: took 6.350074ms waiting for pod "kube-scheduler-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.951141   27997 pod_ready.go:38] duration metric: took 9.054023106s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 04:08:19.951158   27997 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0128 04:08:19.965058   27997 ops.go:34] apiserver oom_adj: -16
	I0128 04:08:19.965079   27997 kubeadm.go:637] restartCluster took 56.576394153s
	I0128 04:08:19.965086   27997 kubeadm.go:403] StartCluster complete in 56.609465724s
	I0128 04:08:19.965103   27997 settings.go:142] acquiring lock: {Name:mkba6eafa5830ee298eee339d43ce981c09fcd93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 04:08:19.965179   27997 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3903/kubeconfig
	I0128 04:08:19.966017   27997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3903/kubeconfig: {Name:mk6d09a9ae49503096fa4914dc61ac689beebb9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 04:08:19.966241   27997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0128 04:08:19.966328   27997 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0128 04:08:19.966408   27997 addons.go:65] Setting storage-provisioner=true in profile "pause-539738"
	I0128 04:08:19.966411   27997 config.go:180] Loaded profile config "pause-539738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 04:08:19.966414   27997 addons.go:65] Setting default-storageclass=true in profile "pause-539738"
	I0128 04:08:19.966426   27997 addons.go:227] Setting addon storage-provisioner=true in "pause-539738"
	W0128 04:08:19.966434   27997 addons.go:236] addon storage-provisioner should already be in state true
	I0128 04:08:19.966439   27997 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-539738"
	I0128 04:08:19.966502   27997 host.go:66] Checking if "pause-539738" exists ...
	I0128 04:08:19.966819   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:19.966855   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:19.966856   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:19.966902   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:19.967133   27997 kapi.go:59] client config for pause-539738: &rest.Config{Host:"https://192.168.61.35:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 04:08:19.970013   27997 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-539738" context rescaled to 1 replicas
	I0128 04:08:19.970046   27997 start.go:221] Will wait 6m0s for node &{Name: IP:192.168.61.35 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 04:08:19.972097   27997 out.go:177] * Verifying Kubernetes components...
	I0128 04:08:19.973640   27997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 04:08:19.982426   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33553
	I0128 04:08:19.982827   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:19.983285   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:19.983307   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:19.983640   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:19.983915   27997 main.go:141] libmachine: (pause-539738) Calling .GetState
	I0128 04:08:19.986373   27997 kapi.go:59] client config for pause-539738: &rest.Config{Host:"https://192.168.61.35:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 04:08:19.988123   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40279
	I0128 04:08:19.988506   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:19.989003   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:19.989020   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:19.989470   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:19.989813   27997 addons.go:227] Setting addon default-storageclass=true in "pause-539738"
	W0128 04:08:19.989824   27997 addons.go:236] addon default-storageclass should already be in state true
	I0128 04:08:19.989845   27997 host.go:66] Checking if "pause-539738" exists ...
	I0128 04:08:19.990068   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:19.990081   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:19.990504   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:19.990529   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:20.010224   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0128 04:08:20.010819   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:20.011457   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:20.011479   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:20.014118   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34355
	I0128 04:08:20.014312   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:20.014460   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:20.014899   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:20.014918   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:20.015006   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:20.015042   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:20.015420   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:20.015683   27997 main.go:141] libmachine: (pause-539738) Calling .GetState
	I0128 04:08:20.017689   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:08:20.020135   27997 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0128 04:08:16.692913   29022 machine.go:88] provisioning docker machine ...
	I0128 04:08:16.692933   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:16.693126   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetMachineName
	I0128 04:08:16.693289   29022 buildroot.go:166] provisioning hostname "kubernetes-upgrade-994986"
	I0128 04:08:16.693312   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetMachineName
	I0128 04:08:16.693475   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:16.696303   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.696779   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:16.696808   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.696951   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:16.697099   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:16.697252   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:16.697388   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:16.697558   29022 main.go:141] libmachine: Using SSH client type: native
	I0128 04:08:16.697732   29022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.83.15 22 <nil> <nil>}
	I0128 04:08:16.697751   29022 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-994986 && echo "kubernetes-upgrade-994986" | sudo tee /etc/hostname
	I0128 04:08:16.818238   29022 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-994986
	
	I0128 04:08:16.818269   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:16.821186   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.821535   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:16.821563   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.821714   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:16.821903   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:16.822101   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:16.822295   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:16.822489   29022 main.go:141] libmachine: Using SSH client type: native
	I0128 04:08:16.822679   29022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.83.15 22 <nil> <nil>}
	I0128 04:08:16.822709   29022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-994986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-994986/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-994986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 04:08:16.931763   29022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 04:08:16.931793   29022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3903/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3903/.minikube}
	I0128 04:08:16.931818   29022 buildroot.go:174] setting up certificates
	I0128 04:08:16.931838   29022 provision.go:83] configureAuth start
	I0128 04:08:16.931855   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetMachineName
	I0128 04:08:16.932133   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetIP
	I0128 04:08:16.935197   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.935667   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:16.935696   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.935883   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:16.938259   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.938619   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:16.938641   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.938760   29022 provision.go:138] copyHostCerts
	I0128 04:08:16.938799   29022 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3903/.minikube/ca.pem, removing ...
	I0128 04:08:16.938807   29022 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3903/.minikube/ca.pem
	I0128 04:08:16.938859   29022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3903/.minikube/ca.pem (1078 bytes)
	I0128 04:08:16.938964   29022 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3903/.minikube/cert.pem, removing ...
	I0128 04:08:16.938978   29022 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3903/.minikube/cert.pem
	I0128 04:08:16.939012   29022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3903/.minikube/cert.pem (1123 bytes)
	I0128 04:08:16.939086   29022 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3903/.minikube/key.pem, removing ...
	I0128 04:08:16.939094   29022 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3903/.minikube/key.pem
	I0128 04:08:16.939112   29022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3903/.minikube/key.pem (1679 bytes)
	I0128 04:08:16.939160   29022 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-994986 san=[192.168.83.15 192.168.83.15 localhost 127.0.0.1 minikube kubernetes-upgrade-994986]
	I0128 04:08:17.093835   29022 provision.go:172] copyRemoteCerts
	I0128 04:08:17.093903   29022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 04:08:17.093927   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.096940   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.097286   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.097323   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.097461   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.097667   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.097865   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.098060   29022 sshutil.go:53] new ssh client: &{IP:192.168.83.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/kubernetes-upgrade-994986/id_rsa Username:docker}
	I0128 04:08:17.181067   29022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0128 04:08:17.203249   29022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0128 04:08:17.228073   29022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 04:08:17.250782   29022 provision.go:86] duration metric: configureAuth took 318.925999ms
	I0128 04:08:17.250807   29022 buildroot.go:189] setting minikube options for container-runtime
	I0128 04:08:17.251016   29022 config.go:180] Loaded profile config "kubernetes-upgrade-994986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 04:08:17.251040   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:17.251270   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.253757   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.254164   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.254217   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.254341   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.254508   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.254694   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.254858   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.255027   29022 main.go:141] libmachine: Using SSH client type: native
	I0128 04:08:17.255158   29022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.83.15 22 <nil> <nil>}
	I0128 04:08:17.255171   29022 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 04:08:17.361758   29022 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0128 04:08:17.361782   29022 buildroot.go:70] root file system type: tmpfs
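The probe above runs `df --output=fstype / | tail -n 1` over SSH and records the result (tmpfs on the Buildroot guest) so later steps know the root filesystem is volatile. A minimal local sketch of the same check, assuming GNU coreutils df; this is an illustration, not minikube's own code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // rootFSType mirrors the `df --output=fstype / | tail -n 1` probe from the
    // log: the first output line is the header, the last line is the value.
    func rootFSType() (string, error) {
        out, err := exec.Command("df", "--output=fstype", "/").Output()
        if err != nil {
            return "", err
        }
        lines := strings.Split(strings.TrimSpace(string(out)), "\n")
        return lines[len(lines)-1], nil
    }

    func main() {
        fstype, err := rootFSType()
        if err != nil {
            fmt.Println("df failed:", err)
            return
        }
        fmt.Println("root file system type:", fstype)
    }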
	I0128 04:08:17.361981   29022 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 04:08:17.362006   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.365107   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.365517   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.365566   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.365761   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.365965   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.366157   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.366312   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.366484   29022 main.go:141] libmachine: Using SSH client type: native
	I0128 04:08:17.366636   29022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.83.15 22 <nil> <nil>}
	I0128 04:08:17.366728   29022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 04:08:17.500094   29022 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 04:08:17.500129   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.503193   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.503638   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.503668   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.503945   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.504143   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.504325   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.504482   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.504654   29022 main.go:141] libmachine: Using SSH client type: native
	I0128 04:08:17.504829   29022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.83.15 22 <nil> <nil>}
	I0128 04:08:17.504854   29022 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 04:08:17.622110   29022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 04:08:17.622136   29022 machine.go:91] provisioned docker machine in 929.207043ms
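The one-liner above makes the unit update idempotent: `diff -u` exits 0 when docker.service.new matches the installed docker.service, so the move, daemon-reload, enable and restart only happen when the rendered unit actually changed. A rough sketch of the same pattern, using the paths from the log (an illustration of the shell logic, not the provisioner itself):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyUnit swaps in the new unit file and restarts docker only when the
    // rendered unit differs from what is already installed, mirroring the
    // `diff -u ... || { mv ...; daemon-reload; enable; restart; }` one-liner.
    func applyUnit(installed, candidate string) error {
        // `diff -u` exits 0 when the files are identical; any other exit
        // status (files differ, or a file is missing) falls through to the update.
        if err := exec.Command("sudo", "diff", "-u", installed, candidate).Run(); err == nil {
            return nil
        }
        steps := [][]string{
            {"sudo", "mv", candidate, installed},
            {"sudo", "systemctl", "daemon-reload"},
            {"sudo", "systemctl", "enable", "docker"},
            {"sudo", "systemctl", "restart", "docker"},
        }
        for _, s := range steps {
            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v failed: %v: %s", s, err, out)
            }
        }
        return nil
    }

    func main() {
        err := applyUnit("/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new")
        fmt.Println("unit update result:", err)
    }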
	I0128 04:08:17.622150   29022 start.go:300] post-start starting for "kubernetes-upgrade-994986" (driver="kvm2")
	I0128 04:08:17.622159   29022 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 04:08:17.622185   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:17.622517   29022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 04:08:17.622551   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.625414   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.625887   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.625921   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.626139   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.626323   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.626503   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.626684   29022 sshutil.go:53] new ssh client: &{IP:192.168.83.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/kubernetes-upgrade-994986/id_rsa Username:docker}
	I0128 04:08:17.714138   29022 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 04:08:17.718403   29022 info.go:137] Remote host: Buildroot 2021.02.12
	I0128 04:08:17.718426   29022 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3903/.minikube/addons for local assets ...
	I0128 04:08:17.718489   29022 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3903/.minikube/files for local assets ...
	I0128 04:08:17.718579   29022 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3903/.minikube/files/etc/ssl/certs/110622.pem -> 110622.pem in /etc/ssl/certs
	I0128 04:08:17.718716   29022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 04:08:17.728839   29022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/files/etc/ssl/certs/110622.pem --> /etc/ssl/certs/110622.pem (1708 bytes)
	I0128 04:08:17.754476   29022 start.go:303] post-start completed in 132.310579ms
	I0128 04:08:17.754495   29022 fix.go:57] fixHost completed within 1.086776105s
	I0128 04:08:17.754521   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.758345   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.758915   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.758940   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.759289   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.759473   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.759730   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.759873   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.760053   29022 main.go:141] libmachine: Using SSH client type: native
	I0128 04:08:17.760222   29022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.83.15 22 <nil> <nil>}
	I0128 04:08:17.760236   29022 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0128 04:08:17.876316   29022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1674878897.869922584
	
	I0128 04:08:17.876341   29022 fix.go:207] guest clock: 1674878897.869922584
	I0128 04:08:17.876354   29022 fix.go:220] Guest: 2023-01-28 04:08:17.869922584 +0000 UTC Remote: 2023-01-28 04:08:17.754499124 +0000 UTC m=+16.652524418 (delta=115.42346ms)
	I0128 04:08:17.876378   29022 fix.go:191] guest clock delta is within tolerance: 115.42346ms
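The clock check compares the guest's `date +%s.%N` output against the host's wall clock at the moment of the call; here the guest is about 115ms ahead, which is treated as within tolerance, so no time sync is forced. A small sketch that reproduces the delta from the values in the log (the tolerance constant is an assumption for illustration only):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestDate turns `date +%s.%N` output into a time.Time. It assumes
    // the fractional part is the full 9-digit nanosecond field that %N prints.
    func parseGuestDate(s string) (time.Time, error) {
        parts := strings.SplitN(s, ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
        guest, err := parseGuestDate("1674878897.869922584") // guest output from the log
        if err != nil {
            panic(err)
        }
        // Host wall clock at the same moment, also taken from the log line above.
        host := time.Date(2023, time.January, 28, 4, 8, 17, 754499124, time.UTC)
        delta := guest.Sub(host)
        const tolerance = 2 * time.Second // assumed threshold, for illustration only
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta > -tolerance && delta < tolerance)
    }

Run as written, this prints a delta of 115.42346ms, matching the value logged above.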
	I0128 04:08:17.876384   29022 start.go:83] releasing machines lock for "kubernetes-upgrade-994986", held for 1.208681301s
	I0128 04:08:17.876409   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:17.876678   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetIP
	I0128 04:08:17.879559   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.879901   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.879932   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.880121   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:17.880698   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:17.880877   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:17.880991   29022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 04:08:17.881031   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.881111   29022 ssh_runner.go:195] Run: cat /version.json
	I0128 04:08:17.881125   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.884280   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.884650   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.884679   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.884698   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.884932   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.885027   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.885045   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.885087   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.885235   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.885318   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.885384   29022 sshutil.go:53] new ssh client: &{IP:192.168.83.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/kubernetes-upgrade-994986/id_rsa Username:docker}
	I0128 04:08:17.885481   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.885617   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.885768   29022 sshutil.go:53] new ssh client: &{IP:192.168.83.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/kubernetes-upgrade-994986/id_rsa Username:docker}
	W0128 04:08:17.979757   29022 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
	I0128 04:08:17.979863   29022 ssh_runner.go:195] Run: systemctl --version
	I0128 04:08:18.004370   29022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0128 04:08:18.011919   29022 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0128 04:08:18.012025   29022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 04:08:18.023993   29022 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 04:08:18.041222   29022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 04:08:18.051173   29022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 04:08:18.066118   29022 cni.go:307] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0128 04:08:18.066139   29022 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 04:08:18.066253   29022 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 04:08:18.097095   29022 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 04:08:18.097120   29022 docker.go:560] Images already preloaded, skipping extraction
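The preload check lists what the guest's Docker already has (`docker images --format {{.Repository}}:{{.Tag}}`) and compares it against the images required for Kubernetes v1.26.1; since everything is present, the preloaded tarball is not extracted again. A standalone sketch of that comparison, with a deliberately shortened required-image list:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same listing command as in the log; requires a local docker CLI.
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            fmt.Println("docker images failed:", err)
            return
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        // A couple of the images the v1.26.1 preload expects, per the log above.
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.26.1",
            "registry.k8s.io/etcd:3.5.6-0",
        }
        missing := 0
        for _, img := range required {
            if !have[img] {
                fmt.Println("missing:", img)
                missing++
            }
        }
        if missing == 0 {
            fmt.Println("images already preloaded, extraction can be skipped")
        }
    }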
	I0128 04:08:18.097130   29022 start.go:472] detecting cgroup driver to use...
	I0128 04:08:18.097261   29022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 04:08:18.117843   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 04:08:18.133993   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 04:08:18.145418   29022 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 04:08:18.145484   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 04:08:18.176708   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 04:08:18.202186   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 04:08:18.223484   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 04:08:18.240083   29022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 04:08:18.264601   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 04:08:18.281983   29022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 04:08:18.300085   29022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 04:08:18.323975   29022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 04:08:18.520743   29022 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 04:08:18.540016   29022 start.go:472] detecting cgroup driver to use...
	I0128 04:08:18.540096   29022 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 04:08:18.556754   29022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0128 04:08:18.571199   29022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0128 04:08:18.596423   29022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0128 04:08:18.611640   29022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 04:08:18.623025   29022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 04:08:18.640377   29022 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 04:08:18.788949   29022 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 04:08:18.956600   29022 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 04:08:18.956631   29022 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 04:08:18.972993   29022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 04:08:19.142994   29022 ssh_runner.go:195] Run: sudo systemctl restart docker
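This block points crictl at cri-dockerd, forces containerd to the cgroupfs cgroup driver, and then writes a small /etc/docker/daemon.json (144 bytes here) so dockerd uses cgroupfs as well before the daemon is restarted. The exact file contents are not shown in the log; a typical daemon.json selecting cgroupfs looks roughly like the output of this sketch (content assumed for illustration, not the actual file minikube writes):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Illustrative daemon.json selecting the cgroupfs driver; the real
        // file written by the provisioner may carry additional settings.
        cfg := map[string]any{
            "exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
            "log-driver": "json-file",
        }
        b, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
        // Inside the guest this content would land at /etc/docker/daemon.json
        // before `systemctl daemon-reload && systemctl restart docker`.
    }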
	I0128 04:08:20.021649   27997 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 04:08:20.021666   27997 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0128 04:08:20.021683   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:08:20.024866   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:08:20.025519   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:08:20.025545   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:08:20.025726   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:08:20.025894   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:08:20.026077   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:08:20.026226   27997 sshutil.go:53] new ssh client: &{IP:192.168.61.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/pause-539738/id_rsa Username:docker}
	I0128 04:08:20.033882   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0128 04:08:20.034220   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:20.034646   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:20.034662   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:20.034945   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:20.035142   27997 main.go:141] libmachine: (pause-539738) Calling .GetState
	I0128 04:08:20.036874   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:08:20.037194   27997 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0128 04:08:20.037218   27997 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0128 04:08:20.037237   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:08:20.040521   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:08:20.041037   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:08:20.041057   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:08:20.041208   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:08:20.041364   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:08:20.041504   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:08:20.041604   27997 sshutil.go:53] new ssh client: &{IP:192.168.61.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/pause-539738/id_rsa Username:docker}
	I0128 04:08:20.139989   27997 start.go:881] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0128 04:08:20.140046   27997 node_ready.go:35] waiting up to 6m0s for node "pause-539738" to be "Ready" ...
	I0128 04:08:20.143009   27997 node_ready.go:49] node "pause-539738" has status "Ready":"True"
	I0128 04:08:20.143027   27997 node_ready.go:38] duration metric: took 2.970545ms waiting for node "pause-539738" to be "Ready" ...
	I0128 04:08:20.143034   27997 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 04:08:20.148143   27997 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-jvdr8" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:20.171291   27997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 04:08:20.194995   27997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0128 04:08:20.216817   27997 pod_ready.go:92] pod "coredns-787d4945fb-jvdr8" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:20.216845   27997 pod_ready.go:81] duration metric: took 68.682415ms waiting for pod "coredns-787d4945fb-jvdr8" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:20.216857   27997 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:20.616097   27997 pod_ready.go:92] pod "etcd-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:20.616171   27997 pod_ready.go:81] duration metric: took 399.304996ms waiting for pod "etcd-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:20.616192   27997 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:21.029453   27997 pod_ready.go:92] pod "kube-apiserver-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:21.029481   27997 pod_ready.go:81] duration metric: took 413.271931ms waiting for pod "kube-apiserver-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:21.029497   27997 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:21.585474   27997 pod_ready.go:92] pod "kube-controller-manager-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:21.585504   27997 pod_ready.go:81] duration metric: took 555.998841ms waiting for pod "kube-controller-manager-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:21.585519   27997 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2vxmw" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:22.194411   27997 pod_ready.go:92] pod "kube-proxy-2vxmw" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:22.194435   27997 pod_ready.go:81] duration metric: took 608.908313ms waiting for pod "kube-proxy-2vxmw" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:22.194447   27997 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:22.270839   27997 pod_ready.go:92] pod "kube-scheduler-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:22.270869   27997 pod_ready.go:81] duration metric: took 76.409295ms waiting for pod "kube-scheduler-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:22.270881   27997 pod_ready.go:38] duration metric: took 2.127838794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
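The readiness phase waits, with a 6-minute budget per pod, for each system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report a Ready condition of True. A client-go sketch of the same kind of poll for one label selector; the kubeconfig path is a placeholder and this is not minikube's own helper:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"}) // one of the selectors used above
            if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }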
	I0128 04:08:22.270907   27997 api_server.go:51] waiting for apiserver process to appear ...
	I0128 04:08:22.270958   27997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 04:08:22.523676   27997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.352347256s)
	I0128 04:08:22.523720   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.523733   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.523819   27997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.328799927s)
	I0128 04:08:22.523832   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.523840   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.523883   27997 api_server.go:71] duration metric: took 2.553813709s to wait for apiserver process to appear ...
	I0128 04:08:22.523890   27997 api_server.go:87] waiting for apiserver healthz status ...
	I0128 04:08:22.523901   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:08:22.527469   27997 main.go:141] libmachine: (pause-539738) DBG | Closing plugin on server side
	I0128 04:08:22.527525   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.527543   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.527552   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.527566   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.527575   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.527590   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.527579   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.527652   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.527673   27997 main.go:141] libmachine: (pause-539738) DBG | Closing plugin on server side
	I0128 04:08:22.527862   27997 main.go:141] libmachine: (pause-539738) DBG | Closing plugin on server side
	I0128 04:08:22.527907   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.527918   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.527942   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.527955   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.527971   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.527981   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.528224   27997 main.go:141] libmachine: (pause-539738) DBG | Closing plugin on server side
	I0128 04:08:22.528266   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.528281   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.530083   27997 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0128 04:08:18.134024   29125 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0128 04:08:18.134174   29125 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:18.134230   29125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:18.149057   29125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44709
	I0128 04:08:18.149437   29125 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:18.150172   29125 main.go:141] libmachine: Using API Version  1
	I0128 04:08:18.150195   29125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:18.150602   29125 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:18.150838   29125 main.go:141] libmachine: (force-systemd-flag-746602) Calling .GetMachineName
	I0128 04:08:18.151044   29125 main.go:141] libmachine: (force-systemd-flag-746602) Calling .DriverName
	I0128 04:08:18.151265   29125 start.go:159] libmachine.API.Create for "force-systemd-flag-746602" (driver="kvm2")
	I0128 04:08:18.151303   29125 client.go:168] LocalClient.Create starting
	I0128 04:08:18.151339   29125 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem
	I0128 04:08:18.151386   29125 main.go:141] libmachine: Decoding PEM data...
	I0128 04:08:18.151431   29125 main.go:141] libmachine: Parsing certificate...
	I0128 04:08:18.151521   29125 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3903/.minikube/certs/cert.pem
	I0128 04:08:18.151555   29125 main.go:141] libmachine: Decoding PEM data...
	I0128 04:08:18.151582   29125 main.go:141] libmachine: Parsing certificate...
	I0128 04:08:18.151619   29125 main.go:141] libmachine: Running pre-create checks...
	I0128 04:08:18.151636   29125 main.go:141] libmachine: (force-systemd-flag-746602) Calling .PreCreateCheck
	I0128 04:08:18.152097   29125 main.go:141] libmachine: (force-systemd-flag-746602) Calling .GetConfigRaw
	I0128 04:08:18.152608   29125 main.go:141] libmachine: Creating machine...
	I0128 04:08:18.152629   29125 main.go:141] libmachine: (force-systemd-flag-746602) Calling .Create
	I0128 04:08:18.152783   29125 main.go:141] libmachine: (force-systemd-flag-746602) Creating KVM machine...
	I0128 04:08:18.154221   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | found existing default KVM network
	I0128 04:08:18.155948   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:18.155769   29147 network.go:295] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc0001881c8] misses:0}
	I0128 04:08:18.155985   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:18.155866   29147 network.go:241] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0128 04:08:18.161375   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | trying to create private KVM network mk-force-systemd-flag-746602 192.168.39.0/24...
	I0128 04:08:18.253704   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | private KVM network mk-force-systemd-flag-746602 192.168.39.0/24 created
	I0128 04:08:18.253823   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting up store path in /home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602 ...
	I0128 04:08:18.253933   29125 main.go:141] libmachine: (force-systemd-flag-746602) Building disk image from file:///home/jenkins/minikube-integration/15565-3903/.minikube/cache/iso/amd64/minikube-v1.29.0-1674856271-15565-amd64.iso
	I0128 04:08:18.254045   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:18.253983   29147 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/15565-3903/.minikube
	I0128 04:08:18.254165   29125 main.go:141] libmachine: (force-systemd-flag-746602) Downloading /home/jenkins/minikube-integration/15565-3903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/15565-3903/.minikube/cache/iso/amd64/minikube-v1.29.0-1674856271-15565-amd64.iso...
	I0128 04:08:18.453457   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:18.453336   29147 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602/id_rsa...
	I0128 04:08:18.505183   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:18.505075   29147 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602/force-systemd-flag-746602.rawdisk...
	I0128 04:08:18.505216   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Writing magic tar header
	I0128 04:08:18.505231   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Writing SSH key tar header
	I0128 04:08:18.505250   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:18.505185   29147 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602 ...
	I0128 04:08:18.505335   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602
	I0128 04:08:18.505373   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting executable bit set on /home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602 (perms=drwx------)
	I0128 04:08:18.505386   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15565-3903/.minikube/machines
	I0128 04:08:18.505417   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15565-3903/.minikube
	I0128 04:08:18.505434   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15565-3903
	I0128 04:08:18.505451   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0128 04:08:18.505469   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home/jenkins
	I0128 04:08:18.505486   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home
	I0128 04:08:18.505503   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting executable bit set on /home/jenkins/minikube-integration/15565-3903/.minikube/machines (perms=drwxrwxr-x)
	I0128 04:08:18.505518   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Skipping /home - not owner
	I0128 04:08:18.505535   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting executable bit set on /home/jenkins/minikube-integration/15565-3903/.minikube (perms=drwxr-xr-x)
	I0128 04:08:18.505551   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting executable bit set on /home/jenkins/minikube-integration/15565-3903 (perms=drwxrwxr-x)
	I0128 04:08:18.505564   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0128 04:08:18.505578   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0128 04:08:18.505593   29125 main.go:141] libmachine: (force-systemd-flag-746602) Creating domain...
	I0128 04:08:18.506767   29125 main.go:141] libmachine: (force-systemd-flag-746602) define libvirt domain using xml: 
	I0128 04:08:18.506794   29125 main.go:141] libmachine: (force-systemd-flag-746602) <domain type='kvm'>
	I0128 04:08:18.506807   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <name>force-systemd-flag-746602</name>
	I0128 04:08:18.506822   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <memory unit='MiB'>2048</memory>
	I0128 04:08:18.506833   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <vcpu>2</vcpu>
	I0128 04:08:18.506840   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <features>
	I0128 04:08:18.506846   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <acpi/>
	I0128 04:08:18.506858   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <apic/>
	I0128 04:08:18.506891   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <pae/>
	I0128 04:08:18.506915   29125 main.go:141] libmachine: (force-systemd-flag-746602)     
	I0128 04:08:18.506928   29125 main.go:141] libmachine: (force-systemd-flag-746602)   </features>
	I0128 04:08:18.506944   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <cpu mode='host-passthrough'>
	I0128 04:08:18.506956   29125 main.go:141] libmachine: (force-systemd-flag-746602)   
	I0128 04:08:18.506968   29125 main.go:141] libmachine: (force-systemd-flag-746602)   </cpu>
	I0128 04:08:18.506983   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <os>
	I0128 04:08:18.506995   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <type>hvm</type>
	I0128 04:08:18.507009   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <boot dev='cdrom'/>
	I0128 04:08:18.507019   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <boot dev='hd'/>
	I0128 04:08:18.507031   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <bootmenu enable='no'/>
	I0128 04:08:18.507040   29125 main.go:141] libmachine: (force-systemd-flag-746602)   </os>
	I0128 04:08:18.507048   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <devices>
	I0128 04:08:18.507059   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <disk type='file' device='cdrom'>
	I0128 04:08:18.507077   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <source file='/home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602/boot2docker.iso'/>
	I0128 04:08:18.507111   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <target dev='hdc' bus='scsi'/>
	I0128 04:08:18.507133   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <readonly/>
	I0128 04:08:18.507147   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </disk>
	I0128 04:08:18.507158   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <disk type='file' device='disk'>
	I0128 04:08:18.507174   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0128 04:08:18.507192   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <source file='/home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602/force-systemd-flag-746602.rawdisk'/>
	I0128 04:08:18.507204   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <target dev='hda' bus='virtio'/>
	I0128 04:08:18.507216   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </disk>
	I0128 04:08:18.507227   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <interface type='network'>
	I0128 04:08:18.507239   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <source network='mk-force-systemd-flag-746602'/>
	I0128 04:08:18.507253   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <model type='virtio'/>
	I0128 04:08:18.507273   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </interface>
	I0128 04:08:18.507287   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <interface type='network'>
	I0128 04:08:18.507298   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <source network='default'/>
	I0128 04:08:18.507312   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <model type='virtio'/>
	I0128 04:08:18.507321   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </interface>
	I0128 04:08:18.507335   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <serial type='pty'>
	I0128 04:08:18.507352   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <target port='0'/>
	I0128 04:08:18.507363   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </serial>
	I0128 04:08:18.507377   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <console type='pty'>
	I0128 04:08:18.507408   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <target type='serial' port='0'/>
	I0128 04:08:18.507423   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </console>
	I0128 04:08:18.507432   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <rng model='virtio'>
	I0128 04:08:18.507447   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <backend model='random'>/dev/random</backend>
	I0128 04:08:18.507458   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </rng>
	I0128 04:08:18.507470   29125 main.go:141] libmachine: (force-systemd-flag-746602)     
	I0128 04:08:18.507485   29125 main.go:141] libmachine: (force-systemd-flag-746602)     
	I0128 04:08:18.507501   29125 main.go:141] libmachine: (force-systemd-flag-746602)   </devices>
	I0128 04:08:18.507513   29125 main.go:141] libmachine: (force-systemd-flag-746602) </domain>
	I0128 04:08:18.507524   29125 main.go:141] libmachine: (force-systemd-flag-746602) 
	I0128 04:08:18.512069   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:b6:7f:c4 in network default
	I0128 04:08:18.512643   29125 main.go:141] libmachine: (force-systemd-flag-746602) Ensuring networks are active...
	I0128 04:08:18.512669   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:18.513466   29125 main.go:141] libmachine: (force-systemd-flag-746602) Ensuring network default is active
	I0128 04:08:18.513773   29125 main.go:141] libmachine: (force-systemd-flag-746602) Ensuring network mk-force-systemd-flag-746602 is active
	I0128 04:08:18.514437   29125 main.go:141] libmachine: (force-systemd-flag-746602) Getting domain xml...
	I0128 04:08:18.515321   29125 main.go:141] libmachine: (force-systemd-flag-746602) Creating domain...
	I0128 04:08:19.921648   29125 main.go:141] libmachine: (force-systemd-flag-746602) Waiting to get IP...
	I0128 04:08:19.922401   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:19.922814   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:19.922838   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:19.922798   29147 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0128 04:08:20.187271   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:20.187847   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:20.187880   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:20.187755   29147 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0128 04:08:20.570464   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:20.571021   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:20.571057   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:20.570986   29147 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0128 04:08:20.995680   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:20.996369   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:20.996395   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:20.996317   29147 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0128 04:08:21.470972   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:21.471651   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:21.471685   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:21.471567   29147 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0128 04:08:22.060556   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:22.061111   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:22.061158   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:22.061063   29147 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0128 04:08:22.896406   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:22.896868   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:22.896899   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:22.896811   29147 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
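While the new force-systemd-flag VM boots, libmachine polls the private network's DHCP leases for the domain's MAC address and sleeps between attempts with a growing delay (263ms, 381ms, 422ms, ... above). A generic sketch of that wait loop; the lookup closure below is a stand-in for the libvirt lease query, and the returned address is made up:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP retries lookup with a growing, jittered delay until it returns
    // an address or the overall deadline expires.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            if delay < 5*time.Second {
                delay += delay / 2 // grow the base delay, roughly like the retry intervals in the log
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no DHCP lease yet") // simulate the lease not existing yet
            }
            return "192.168.39.23", nil // made-up address for the demo
        }, 2*time.Minute)
        fmt.Println(ip, err)
    }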
	I0128 04:08:22.531478   27997 addons.go:488] enableAddons completed in 2.565153495s
	I0128 04:08:22.536235   27997 api_server.go:278] https://192.168.61.35:8443/healthz returned 200:
	ok
	I0128 04:08:22.546519   27997 api_server.go:140] control plane version: v1.26.1
	I0128 04:08:22.546542   27997 api_server.go:130] duration metric: took 22.645385ms to wait for apiserver health ...
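Once the apiserver process is up, the start path polls https://192.168.61.35:8443/healthz until it answers 200 with body "ok", then records the control-plane version. A bare-bones sketch of that poll; skipping TLS verification here stands in for trusting the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the given healthz URL until it returns 200/"ok"
    // or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // A real client would trust the cluster CA; skipping verification
            // keeps this sketch self-contained.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("apiserver at %s never reported healthy", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.35:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver healthz returned 200: ok")
    }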
	I0128 04:08:22.546567   27997 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 04:08:22.566601   27997 system_pods.go:59] 7 kube-system pods found
	I0128 04:08:22.566636   27997 system_pods.go:61] "coredns-787d4945fb-jvdr8" [9d5d58d3-36c6-44d2-bf2d-2297c435af12] Running
	I0128 04:08:22.566645   27997 system_pods.go:61] "etcd-pause-539738" [4e925a1f-e8e7-463f-9ca5-30f3bcf9e034] Running
	I0128 04:08:22.566652   27997 system_pods.go:61] "kube-apiserver-pause-539738" [b89c18b3-bea5-480d-8059-6f1909701f9b] Running
	I0128 04:08:22.566665   27997 system_pods.go:61] "kube-controller-manager-pause-539738" [6a7def17-49f7-49d3-9bc6-94c176e59887] Running
	I0128 04:08:22.566743   27997 system_pods.go:61] "kube-proxy-2vxmw" [f0971d3d-f13f-421d-a7db-fa18ee862abb] Running
	I0128 04:08:22.566750   27997 system_pods.go:61] "kube-scheduler-pause-539738" [bf3dd75f-9d11-4088-8afc-6e0200586918] Running
	I0128 04:08:22.566757   27997 system_pods.go:61] "storage-provisioner" [28af396f-4ec7-455c-afe3-469c018c0197] Pending
	I0128 04:08:22.566764   27997 system_pods.go:74] duration metric: took 20.191146ms to wait for pod list to return data ...
	I0128 04:08:22.566780   27997 default_sa.go:34] waiting for default service account to be created ...
	I0128 04:08:22.620765   27997 default_sa.go:45] found service account: "default"
	I0128 04:08:22.620791   27997 default_sa.go:55] duration metric: took 54.004254ms for default service account to be created ...
	I0128 04:08:22.620801   27997 system_pods.go:116] waiting for k8s-apps to be running ...
	I0128 04:08:22.820897   27997 system_pods.go:86] 7 kube-system pods found
	I0128 04:08:22.820980   27997 system_pods.go:89] "coredns-787d4945fb-jvdr8" [9d5d58d3-36c6-44d2-bf2d-2297c435af12] Running
	I0128 04:08:22.820994   27997 system_pods.go:89] "etcd-pause-539738" [4e925a1f-e8e7-463f-9ca5-30f3bcf9e034] Running
	I0128 04:08:22.821001   27997 system_pods.go:89] "kube-apiserver-pause-539738" [b89c18b3-bea5-480d-8059-6f1909701f9b] Running
	I0128 04:08:22.821009   27997 system_pods.go:89] "kube-controller-manager-pause-539738" [6a7def17-49f7-49d3-9bc6-94c176e59887] Running
	I0128 04:08:22.821026   27997 system_pods.go:89] "kube-proxy-2vxmw" [f0971d3d-f13f-421d-a7db-fa18ee862abb] Running
	I0128 04:08:22.821033   27997 system_pods.go:89] "kube-scheduler-pause-539738" [bf3dd75f-9d11-4088-8afc-6e0200586918] Running
	I0128 04:08:22.821048   27997 system_pods.go:89] "storage-provisioner" [28af396f-4ec7-455c-afe3-469c018c0197] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0128 04:08:22.821061   27997 system_pods.go:126] duration metric: took 200.254117ms to wait for k8s-apps to be running ...
	I0128 04:08:22.821072   27997 system_svc.go:44] waiting for kubelet service to be running ....
	I0128 04:08:22.821120   27997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 04:08:22.836608   27997 system_svc.go:56] duration metric: took 15.525635ms WaitForService to wait for kubelet.
	I0128 04:08:22.836632   27997 kubeadm.go:578] duration metric: took 2.866561868s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0128 04:08:22.836651   27997 node_conditions.go:102] verifying NodePressure condition ...
	I0128 04:08:23.017898   27997 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0128 04:08:23.017942   27997 node_conditions.go:123] node cpu capacity is 2
	I0128 04:08:23.017956   27997 node_conditions.go:105] duration metric: took 181.298919ms to run NodePressure ...
	I0128 04:08:23.017971   27997 start.go:226] waiting for startup goroutines ...
	I0128 04:08:23.018318   27997 ssh_runner.go:195] Run: rm -f paused
	I0128 04:08:23.106272   27997 start.go:538] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0128 04:08:23.108522   27997 out.go:177] * Done! kubectl is now configured to use "pause-539738" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-01-28 04:05:00 UTC, ends at Sat 2023-01-28 04:08:24 UTC. --
	Jan 28 04:08:05 pause-539738 dockerd[5359]: time="2023-01-28T04:08:05.218668258Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/28d6d166486a139da399ace5235173174b30a7fea42852988138278879272e63 pid=7948 runtime=io.containerd.runc.v2
	Jan 28 04:08:05 pause-539738 dockerd[5359]: time="2023-01-28T04:08:05.219158098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:08:05 pause-539738 dockerd[5359]: time="2023-01-28T04:08:05.219250357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:08:05 pause-539738 dockerd[5359]: time="2023-01-28T04:08:05.219271823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:08:05 pause-539738 dockerd[5359]: time="2023-01-28T04:08:05.219392373Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a57407ff9027105bc7270d3144d4687dd0cc4b00c60a446a5251ee1aba3137f2 pid=7958 runtime=io.containerd.runc.v2
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.054027780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.054287207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.054300203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.055468854Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/07f5b7ae3031e496dd4adb87df4c3504050e3ae8ad1f880c3c8ac1146edceb11 pid=8116 runtime=io.containerd.runc.v2
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.072042100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.072130150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.072141935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.072698555Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/16c544267dffcb63c0d09b5e96b77c5b7d4df254822a006bcc4ebbcbeb321c0f pid=8131 runtime=io.containerd.runc.v2
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.849050130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.849182209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.849208479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.849664772Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b6f504145d4562aee4ad585627a09297af2a971605b2060b9f2d20c903ba8876 pid=8313 runtime=io.containerd.runc.v2
	Jan 28 04:08:22 pause-539738 dockerd[5359]: time="2023-01-28T04:08:22.990275068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:08:22 pause-539738 dockerd[5359]: time="2023-01-28T04:08:22.990355281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:08:22 pause-539738 dockerd[5359]: time="2023-01-28T04:08:22.990369559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:08:22 pause-539738 dockerd[5359]: time="2023-01-28T04:08:22.991196497Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/afad6178a4e892e3bf961bb8dbba33ae9d8de4014d4b952f0758349d71fc45a7 pid=8549 runtime=io.containerd.runc.v2
	Jan 28 04:08:23 pause-539738 dockerd[5359]: time="2023-01-28T04:08:23.726919019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:08:23 pause-539738 dockerd[5359]: time="2023-01-28T04:08:23.727102052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:08:23 pause-539738 dockerd[5359]: time="2023-01-28T04:08:23.727117818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:08:23 pause-539738 dockerd[5359]: time="2023-01-28T04:08:23.727993188Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/134943d77a41e1c9c63e040a331c6acbd016d64079843e1ad94b581734bf60f0 pid=8601 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	134943d77a41e       6e38f40d628db       1 second ago        Running             storage-provisioner       0                   afad6178a4e89
	b6f504145d456       5185b96f0becf       13 seconds ago      Running             coredns                   2                   16c544267dffc
	07f5b7ae3031e       46a6bb3c77ce0       14 seconds ago      Running             kube-proxy                3                   eadfcd7aabadf
	a57407ff90271       655493523f607       20 seconds ago      Running             kube-scheduler            2                   1015b60a11e04
	28d6d166486a1       fce326961ae2d       20 seconds ago      Running             etcd                      3                   7c05d806c6cec
	d7554f64ab0e3       e9c08e11b07f6       24 seconds ago      Running             kube-controller-manager   2                   c29f61afeb3d7
	4dde87c760c48       deb04688c4a35       25 seconds ago      Running             kube-apiserver            3                   3da0b0f6c1f51
	f4d02970c201c       fce326961ae2d       42 seconds ago      Exited              etcd                      2                   b1adfd0dc97e3
	689f2394c8595       46a6bb3c77ce0       43 seconds ago      Exited              kube-proxy                2                   4cbfacd312e56
	ecd079acd243b       5185b96f0becf       59 seconds ago      Exited              coredns                   1                   29628336e08f4
	a247c449d214a       655493523f607       59 seconds ago      Exited              kube-scheduler            1                   ae418143b0c22
	7a3e62c8e65a3       deb04688c4a35       59 seconds ago      Exited              kube-apiserver            2                   3c2eca72a2a1d
	be6ac4b353504       e9c08e11b07f6       59 seconds ago      Exited              kube-controller-manager   1                   f537afa7d5fe6
	
	* 
	* ==> coredns [b6f504145d45] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:49720 - 3640 "HINFO IN 7174251185602643581.328645765898013938. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021702169s
	
	* 
	* ==> coredns [ecd079acd243] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:52201 - 59003 "HINFO IN 7374916446888445961.4050626325103425631. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029908073s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-539738
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-539738
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a22b9432724c1a7c0bfc1f92a18db163006c245
	                    minikube.k8s.io/name=pause-539738
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_28T04_05_47_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 28 Jan 2023 04:05:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-539738
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 28 Jan 2023 04:08:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 28 Jan 2023 04:08:09 +0000   Sat, 28 Jan 2023 04:05:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 28 Jan 2023 04:08:09 +0000   Sat, 28 Jan 2023 04:05:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 28 Jan 2023 04:08:09 +0000   Sat, 28 Jan 2023 04:05:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 28 Jan 2023 04:08:09 +0000   Sat, 28 Jan 2023 04:05:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.35
	  Hostname:    pause-539738
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e00382430134c4f8b57880d028c449b
	  System UUID:                7e003824-3013-4c4f-8b57-880d028c449b
	  Boot ID:                    5555b58d-bd4c-415b-8db1-9d0778132685
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-jvdr8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m25s
	  kube-system                 etcd-pause-539738                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m37s
	  kube-system                 kube-apiserver-pause-539738             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-controller-manager-pause-539738    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-proxy-2vxmw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-pause-539738             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m22s              kube-proxy       
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 2m38s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m37s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m37s              kubelet          Node pause-539738 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m37s              kubelet          Node pause-539738 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m37s              kubelet          Node pause-539738 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                2m32s              kubelet          Node pause-539738 status is now: NodeReady
	  Normal  RegisteredNode           2m26s              node-controller  Node pause-539738 event: Registered Node pause-539738 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 21s)  kubelet          Node pause-539738 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 21s)  kubelet          Node pause-539738 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 21s)  kubelet          Node pause-539738 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node pause-539738 event: Registered Node pause-539738 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.003769] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.440171] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +0.293151] systemd-fstab-generator[936]: Ignoring "noauto" for root device
	[  +0.138510] systemd-fstab-generator[947]: Ignoring "noauto" for root device
	[  +0.155713] systemd-fstab-generator[960]: Ignoring "noauto" for root device
	[  +1.659489] systemd-fstab-generator[1107]: Ignoring "noauto" for root device
	[  +0.186554] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +0.158164] systemd-fstab-generator[1129]: Ignoring "noauto" for root device
	[  +0.136901] systemd-fstab-generator[1140]: Ignoring "noauto" for root device
	[  +5.553840] systemd-fstab-generator[1388]: Ignoring "noauto" for root device
	[  +1.018295] kauditd_printk_skb: 68 callbacks suppressed
	[ +12.937235] systemd-fstab-generator[2413]: Ignoring "noauto" for root device
	[Jan28 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[ +10.690783] kauditd_printk_skb: 26 callbacks suppressed
	[Jan28 04:07] systemd-fstab-generator[4580]: Ignoring "noauto" for root device
	[  +0.249508] systemd-fstab-generator[4610]: Ignoring "noauto" for root device
	[  +0.187537] systemd-fstab-generator[4621]: Ignoring "noauto" for root device
	[  +0.203657] systemd-fstab-generator[4650]: Ignoring "noauto" for root device
	[  +9.772707] systemd-fstab-generator[5759]: Ignoring "noauto" for root device
	[  +0.138072] systemd-fstab-generator[5777]: Ignoring "noauto" for root device
	[  +0.133438] systemd-fstab-generator[5804]: Ignoring "noauto" for root device
	[  +0.114025] systemd-fstab-generator[5815]: Ignoring "noauto" for root device
	[  +1.191393] kauditd_printk_skb: 34 callbacks suppressed
	[Jan28 04:08] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.711051] systemd-fstab-generator[7766]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [28d6d166486a] <==
	* {"level":"info","ts":"2023-01-28T04:08:16.661Z","caller":"traceutil/trace.go:171","msg":"trace[2092030425] range","detail":"{range_begin:/registry/minions/pause-539738; range_end:; response_count:1; response_revision:493; }","duration":"188.603075ms","start":"2023-01-28T04:08:16.472Z","end":"2023-01-28T04:08:16.661Z","steps":["trace[2092030425] 'range keys from in-memory index tree'  (duration: 187.405043ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:08:21.575Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"157.589939ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10495786225604958759 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" value_size:641 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-01-28T04:08:21.575Z","caller":"traceutil/trace.go:171","msg":"trace[180746045] linearizableReadLoop","detail":"{readStateIndex:553; appliedIndex:552; }","duration":"225.160028ms","start":"2023-01-28T04:08:21.350Z","end":"2023-01-28T04:08:21.575Z","steps":["trace[180746045] 'read index received'  (duration: 67.452876ms)","trace[180746045] 'applied index is now lower than readState.Index'  (duration: 157.706304ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-28T04:08:21.576Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"168.339002ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-539738\" ","response":"range_response_count:1 size:4488"}
	{"level":"info","ts":"2023-01-28T04:08:21.576Z","caller":"traceutil/trace.go:171","msg":"trace[1674861722] range","detail":"{range_begin:/registry/minions/pause-539738; range_end:; response_count:1; response_revision:503; }","duration":"168.400561ms","start":"2023-01-28T04:08:21.407Z","end":"2023-01-28T04:08:21.576Z","steps":["trace[1674861722] 'agreement among raft nodes before linearized reading'  (duration: 168.279728ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-28T04:08:21.577Z","caller":"traceutil/trace.go:171","msg":"trace[1746340332] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"257.498052ms","start":"2023-01-28T04:08:21.319Z","end":"2023-01-28T04:08:21.577Z","steps":["trace[1746340332] 'process raft request'  (duration: 98.427552ms)","trace[1746340332] 'compare'  (duration: 157.502381ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-28T04:08:21.577Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"227.060657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-01-28T04:08:21.579Z","caller":"traceutil/trace.go:171","msg":"trace[1333583828] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:503; }","duration":"229.205222ms","start":"2023-01-28T04:08:21.350Z","end":"2023-01-28T04:08:21.579Z","steps":["trace[1333583828] 'agreement among raft nodes before linearized reading'  (duration: 225.338908ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:08:22.178Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"129.414872ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10495786225604958763 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/storage-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/storage-provisioner\" value_size:1073 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-01-28T04:08:22.178Z","caller":"traceutil/trace.go:171","msg":"trace[306829421] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"593.78677ms","start":"2023-01-28T04:08:21.584Z","end":"2023-01-28T04:08:22.178Z","steps":["trace[306829421] 'process raft request'  (duration: 463.865421ms)","trace[306829421] 'compare'  (duration: 129.336372ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-28T04:08:22.178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-28T04:08:21.584Z","time spent":"593.889869ms","remote":"127.0.0.1:52650","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1130,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/storage-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/storage-provisioner\" value_size:1073 >> failure:<>"}
	{"level":"info","ts":"2023-01-28T04:08:22.178Z","caller":"traceutil/trace.go:171","msg":"trace[369834852] linearizableReadLoop","detail":"{readStateIndex:554; appliedIndex:553; }","duration":"592.935062ms","start":"2023-01-28T04:08:21.585Z","end":"2023-01-28T04:08:22.178Z","steps":["trace[369834852] 'read index received'  (duration: 462.989395ms)","trace[369834852] 'applied index is now lower than readState.Index'  (duration: 129.943975ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-28T04:08:22.179Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"570.627526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-2vxmw\" ","response":"range_response_count:1 size:4540"}
	{"level":"info","ts":"2023-01-28T04:08:22.179Z","caller":"traceutil/trace.go:171","msg":"trace[308989808] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-2vxmw; range_end:; response_count:1; response_revision:504; }","duration":"570.681436ms","start":"2023-01-28T04:08:21.608Z","end":"2023-01-28T04:08:22.179Z","steps":["trace[308989808] 'agreement among raft nodes before linearized reading'  (duration: 570.53031ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:08:22.179Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-28T04:08:21.608Z","time spent":"570.718631ms","remote":"127.0.0.1:52604","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4564,"request content":"key:\"/registry/pods/kube-system/kube-proxy-2vxmw\" "}
	{"level":"warn","ts":"2023-01-28T04:08:22.179Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"519.566846ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-01-28T04:08:22.179Z","caller":"traceutil/trace.go:171","msg":"trace[142877402] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:504; }","duration":"519.605723ms","start":"2023-01-28T04:08:21.659Z","end":"2023-01-28T04:08:22.179Z","steps":["trace[142877402] 'agreement among raft nodes before linearized reading'  (duration: 519.554981ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:08:22.179Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-28T04:08:21.659Z","time spent":"519.692591ms","remote":"127.0.0.1:52616","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-01-28T04:08:22.182Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"597.162555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-01-28T04:08:22.183Z","caller":"traceutil/trace.go:171","msg":"trace[1078778854] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:504; }","duration":"597.834137ms","start":"2023-01-28T04:08:21.585Z","end":"2023-01-28T04:08:22.183Z","steps":["trace[1078778854] 'agreement among raft nodes before linearized reading'  (duration: 593.021189ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:08:22.183Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-28T04:08:21.585Z","time spent":"598.11911ms","remote":"127.0.0.1:52606","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":231,"request content":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" "}
	{"level":"info","ts":"2023-01-28T04:08:22.439Z","caller":"traceutil/trace.go:171","msg":"trace[71657755] linearizableReadLoop","detail":"{readStateIndex:556; appliedIndex:555; }","duration":"159.073405ms","start":"2023-01-28T04:08:22.280Z","end":"2023-01-28T04:08:22.439Z","steps":["trace[71657755] 'read index received'  (duration: 118.473492ms)","trace[71657755] 'applied index is now lower than readState.Index'  (duration: 40.599279ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-28T04:08:22.439Z","caller":"traceutil/trace.go:171","msg":"trace[981654493] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"162.174517ms","start":"2023-01-28T04:08:22.277Z","end":"2023-01-28T04:08:22.439Z","steps":["trace[981654493] 'process raft request'  (duration: 121.217238ms)","trace[981654493] 'compare'  (duration: 40.233397ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-28T04:08:22.440Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"160.273309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2023-01-28T04:08:22.440Z","caller":"traceutil/trace.go:171","msg":"trace[692385285] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:506; }","duration":"160.345071ms","start":"2023-01-28T04:08:22.280Z","end":"2023-01-28T04:08:22.440Z","steps":["trace[692385285] 'agreement among raft nodes before linearized reading'  (duration: 159.502165ms)"],"step_count":1}
	
	* 
	* ==> etcd [f4d02970c201] <==
	* {"level":"info","ts":"2023-01-28T04:07:43.462Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-28T04:07:43.462Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"feba1a131c3b91a8","initial-advertise-peer-urls":["https://192.168.61.35:2380"],"listen-peer-urls":["https://192.168.61.35:2380"],"advertise-client-urls":["https://192.168.61.35:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.35:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-28T04:07:43.462Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-28T04:07:43.462Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.61.35:2380"}
	{"level":"info","ts":"2023-01-28T04:07:43.462Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.35:2380"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feba1a131c3b91a8 is starting a new election at term 3"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feba1a131c3b91a8 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feba1a131c3b91a8 received MsgPreVoteResp from feba1a131c3b91a8 at term 3"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feba1a131c3b91a8 became candidate at term 4"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feba1a131c3b91a8 received MsgVoteResp from feba1a131c3b91a8 at term 4"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feba1a131c3b91a8 became leader at term 4"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: feba1a131c3b91a8 elected leader feba1a131c3b91a8 at term 4"}
	{"level":"info","ts":"2023-01-28T04:07:44.546Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"feba1a131c3b91a8","local-member-attributes":"{Name:pause-539738 ClientURLs:[https://192.168.61.35:2379]}","request-path":"/0/members/feba1a131c3b91a8/attributes","cluster-id":"960419a4944238d5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-28T04:07:44.546Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T04:07:44.546Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T04:07:44.548Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.35:2379"}
	{"level":"info","ts":"2023-01-28T04:07:44.549Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-28T04:07:44.549Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-28T04:07:44.549Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-28T04:07:56.797Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-01-28T04:07:56.797Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-539738","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.35:2380"],"advertise-client-urls":["https://192.168.61.35:2379"]}
	{"level":"info","ts":"2023-01-28T04:07:56.801Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"feba1a131c3b91a8","current-leader-member-id":"feba1a131c3b91a8"}
	{"level":"info","ts":"2023-01-28T04:07:56.804Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.61.35:2380"}
	{"level":"info","ts":"2023-01-28T04:07:56.805Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.61.35:2380"}
	{"level":"info","ts":"2023-01-28T04:07:56.805Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-539738","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.35:2380"],"advertise-client-urls":["https://192.168.61.35:2379"]}
	
	* 
	* ==> kernel <==
	*  04:08:25 up 3 min,  0 users,  load average: 2.40, 1.00, 0.39
	Linux pause-539738 5.10.57 #1 SMP Sat Jan 28 02:15:18 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4dde87c760c4] <==
	* I0128 04:08:08.911224       1 cache.go:39] Caches are synced for autoregister controller
	I0128 04:08:08.911504       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0128 04:08:08.912003       1 shared_informer.go:280] Caches are synced for configmaps
	I0128 04:08:08.913564       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0128 04:08:08.913576       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0128 04:08:08.914077       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0128 04:08:09.428559       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0128 04:08:09.718671       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0128 04:08:10.731127       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0128 04:08:10.751795       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0128 04:08:10.814948       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0128 04:08:10.860919       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0128 04:08:10.870327       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0128 04:08:22.180103       1 trace.go:219] Trace[550369369]: "Create" accept:application/json,audit-id:3111c81f-ae52-4c3c-8730-881fcf0136aa,client:127.0.0.1,protocol:HTTP/2.0,resource:clusterrolebindings,scope:resource,url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings,user-agent:kubectl/v1.26.1 (linux/amd64) kubernetes/8f94681,verb:POST (28-Jan-2023 04:08:21.583) (total time: 596ms):
	Trace[550369369]: ["Create etcd3" audit-id:3111c81f-ae52-4c3c-8730-881fcf0136aa,key:/clusterrolebindings/storage-provisioner,type:*rbac.ClusterRoleBinding,resource:clusterrolebindings.rbac.authorization.k8s.io 595ms (04:08:21.584)
	Trace[550369369]:  ---"Txn call succeeded" 595ms (04:08:22.179)]
	Trace[550369369]: [596.064806ms] [596.064806ms] END
	I0128 04:08:22.182118       1 trace.go:219] Trace[1034241223]: "Get" accept:application/json, */*,audit-id:740a3e31-1f59-461a-9d83-fe2bd4ba4623,client:192.168.61.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-2vxmw,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (28-Jan-2023 04:08:21.607) (total time: 574ms):
	Trace[1034241223]: ---"About to write a response" 572ms (04:08:22.180)
	Trace[1034241223]: [574.421356ms] [574.421356ms] END
	I0128 04:08:22.185810       1 trace.go:219] Trace[2033090596]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:b0817005-4ac4-45fd-b0f2-e4e3f0367c68,client:192.168.61.35,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/deployment-controller,user-agent:kube-controller-manager/v1.26.1 (linux/amd64) kubernetes/8f94681/kube-controller-manager,verb:GET (28-Jan-2023 04:08:21.584) (total time: 600ms):
	Trace[2033090596]: ---"About to write a response" 600ms (04:08:22.185)
	Trace[2033090596]: [600.936521ms] [600.936521ms] END
	I0128 04:08:22.448820       1 controller.go:615] quota admission added evaluator for: endpoints
	I0128 04:08:22.565271       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [7a3e62c8e65a] <==
	* W0128 04:07:37.282358       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0128 04:07:42.285431       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0128 04:07:42.398490       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0128 04:07:46.535702       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-controller-manager [be6ac4b35350] <==
	* I0128 04:07:26.497184       1 serving.go:348] Generated self-signed cert in-memory
	I0128 04:07:26.975139       1 controllermanager.go:182] Version: v1.26.1
	I0128 04:07:26.975179       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 04:07:26.978587       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0128 04:07:26.979626       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0128 04:07:26.979835       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 04:07:26.979919       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	F0128 04:07:47.540500       1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	
	* 
	* ==> kube-controller-manager [d7554f64ab0e] <==
	* I0128 04:08:22.557808       1 shared_informer.go:280] Caches are synced for job
	I0128 04:08:22.567163       1 shared_informer.go:280] Caches are synced for taint
	I0128 04:08:22.567400       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	W0128 04:08:22.567641       1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-539738. Assuming now as a timestamp.
	I0128 04:08:22.567717       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0128 04:08:22.567887       1 event.go:294] "Event occurred" object="pause-539738" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-539738 event: Registered Node pause-539738 in Controller"
	I0128 04:08:22.568019       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0128 04:08:22.568116       1 taint_manager.go:211] "Sending events to api server"
	I0128 04:08:22.571510       1 shared_informer.go:280] Caches are synced for HPA
	I0128 04:08:22.574443       1 shared_informer.go:280] Caches are synced for crt configmap
	I0128 04:08:22.580871       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0128 04:08:22.583895       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0128 04:08:22.584074       1 shared_informer.go:280] Caches are synced for daemon sets
	I0128 04:08:22.588511       1 shared_informer.go:280] Caches are synced for deployment
	I0128 04:08:22.591948       1 shared_informer.go:280] Caches are synced for TTL
	I0128 04:08:22.594947       1 shared_informer.go:280] Caches are synced for stateful set
	I0128 04:08:22.604016       1 shared_informer.go:280] Caches are synced for PVC protection
	I0128 04:08:22.640046       1 shared_informer.go:280] Caches are synced for attach detach
	I0128 04:08:22.671938       1 shared_informer.go:280] Caches are synced for disruption
	I0128 04:08:22.698208       1 shared_informer.go:280] Caches are synced for resource quota
	I0128 04:08:22.720456       1 shared_informer.go:280] Caches are synced for resource quota
	I0128 04:08:22.745370       1 shared_informer.go:280] Caches are synced for ReplicationController
	I0128 04:08:23.081851       1 shared_informer.go:280] Caches are synced for garbage collector
	I0128 04:08:23.082162       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0128 04:08:23.143067       1 shared_informer.go:280] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [07f5b7ae3031] <==
	* I0128 04:08:11.217449       1 node.go:163] Successfully retrieved node IP: 192.168.61.35
	I0128 04:08:11.217539       1 server_others.go:109] "Detected node IP" address="192.168.61.35"
	I0128 04:08:11.217589       1 server_others.go:535] "Using iptables proxy"
	I0128 04:08:11.273268       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0128 04:08:11.273320       1 server_others.go:176] "Using iptables Proxier"
	I0128 04:08:11.273371       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0128 04:08:11.273835       1 server.go:655] "Version info" version="v1.26.1"
	I0128 04:08:11.273875       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 04:08:11.275438       1 config.go:317] "Starting service config controller"
	I0128 04:08:11.275480       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0128 04:08:11.275513       1 config.go:226] "Starting endpoint slice config controller"
	I0128 04:08:11.275519       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0128 04:08:11.276066       1 config.go:444] "Starting node config controller"
	I0128 04:08:11.276081       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0128 04:08:11.375822       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0128 04:08:11.375885       1 shared_informer.go:280] Caches are synced for service config
	I0128 04:08:11.376235       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-proxy [689f2394c859] <==
	* E0128 04:07:47.563911       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-539738": dial tcp 192.168.61.35:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.35:40366->192.168.61.35:8443: read: connection reset by peer
	E0128 04:07:48.598141       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-539738": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:50.951085       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-539738": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.550672       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-539738": dial tcp 192.168.61.35:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [a247c449d214] <==
	* W0128 04:07:55.231126       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.61.35:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.231170       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.35:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:55.285416       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.61.35:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.285453       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.35:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:55.298386       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.61.35:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.298434       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.61.35:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:55.638461       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.35:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.638497       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.35:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:55.936882       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.61.35:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.936966       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.61.35:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:55.945925       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.61.35:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.945969       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.61.35:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:56.025619       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.35:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:56.025805       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.35:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:56.255103       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.61.35:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:56.255267       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.61.35:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:56.265319       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.61.35:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:56.265459       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.61.35:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:56.487409       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.61.35:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:56.487503       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.61.35:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:56.762107       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0128 04:07:56.762273       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0128 04:07:56.762459       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 04:07:56.762469       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0128 04:07:56.763008       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [a57407ff9027] <==
	* I0128 04:08:06.144840       1 serving.go:348] Generated self-signed cert in-memory
	W0128 04:08:08.802589       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0128 04:08:08.802814       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0128 04:08:08.802843       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0128 04:08:08.802940       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0128 04:08:08.832530       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0128 04:08:08.832549       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 04:08:08.840196       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0128 04:08:08.840361       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 04:08:08.841922       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0128 04:08:08.842094       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 04:08:08.942313       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-01-28 04:05:00 UTC, ends at Sat 2023-01-28 04:08:25 UTC. --
	Jan 28 04:08:08 pause-539738 kubelet[7772]: I0128 04:08:08.880077    7772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f0971d3d-f13f-421d-a7db-fa18ee862abb-kube-proxy\") pod \"kube-proxy-2vxmw\" (UID: \"f0971d3d-f13f-421d-a7db-fa18ee862abb\") " pod="kube-system/kube-proxy-2vxmw"
	Jan 28 04:08:08 pause-539738 kubelet[7772]: I0128 04:08:08.880098    7772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0971d3d-f13f-421d-a7db-fa18ee862abb-lib-modules\") pod \"kube-proxy-2vxmw\" (UID: \"f0971d3d-f13f-421d-a7db-fa18ee862abb\") " pod="kube-system/kube-proxy-2vxmw"
	Jan 28 04:08:08 pause-539738 kubelet[7772]: I0128 04:08:08.880120    7772 reconciler.go:41] "Reconciler: start to sync state"
	Jan 28 04:08:09 pause-539738 kubelet[7772]: I0128 04:08:09.261870    7772 kubelet_node_status.go:108] "Node was previously registered" node="pause-539738"
	Jan 28 04:08:09 pause-539738 kubelet[7772]: I0128 04:08:09.261999    7772 kubelet_node_status.go:73] "Successfully registered node" node="pause-539738"
	Jan 28 04:08:09 pause-539738 kubelet[7772]: I0128 04:08:09.263850    7772 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 28 04:08:09 pause-539738 kubelet[7772]: I0128 04:08:09.265242    7772 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 28 04:08:09 pause-539738 kubelet[7772]: E0128 04:08:09.982337    7772 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:09 pause-539738 kubelet[7772]: E0128 04:08:09.982491    7772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f0971d3d-f13f-421d-a7db-fa18ee862abb-kube-proxy podName:f0971d3d-f13f-421d-a7db-fa18ee862abb nodeName:}" failed. No retries permitted until 2023-01-28 04:08:10.482458425 +0000 UTC m=+6.858585389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f0971d3d-f13f-421d-a7db-fa18ee862abb-kube-proxy") pod "kube-proxy-2vxmw" (UID: "f0971d3d-f13f-421d-a7db-fa18ee862abb") : failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:09 pause-539738 kubelet[7772]: E0128 04:08:09.982515    7772 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:09 pause-539738 kubelet[7772]: E0128 04:08:09.982539    7772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d5d58d3-36c6-44d2-bf2d-2297c435af12-config-volume podName:9d5d58d3-36c6-44d2-bf2d-2297c435af12 nodeName:}" failed. No retries permitted until 2023-01-28 04:08:10.482532221 +0000 UTC m=+6.858659182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9d5d58d3-36c6-44d2-bf2d-2297c435af12-config-volume") pod "coredns-787d4945fb-jvdr8" (UID: "9d5d58d3-36c6-44d2-bf2d-2297c435af12") : failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: E0128 04:08:10.260802    7772 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: E0128 04:08:10.261075    7772 projected.go:198] Error preparing data for projected volume kube-api-access-98mds for pod kube-system/coredns-787d4945fb-jvdr8: failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: E0128 04:08:10.261357    7772 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: E0128 04:08:10.261404    7772 projected.go:198] Error preparing data for projected volume kube-api-access-jqpw9 for pod kube-system/kube-proxy-2vxmw: failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: E0128 04:08:10.261575    7772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d5d58d3-36c6-44d2-bf2d-2297c435af12-kube-api-access-98mds podName:9d5d58d3-36c6-44d2-bf2d-2297c435af12 nodeName:}" failed. No retries permitted until 2023-01-28 04:08:10.761365028 +0000 UTC m=+7.137491974 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-98mds" (UniqueName: "kubernetes.io/projected/9d5d58d3-36c6-44d2-bf2d-2297c435af12-kube-api-access-98mds") pod "coredns-787d4945fb-jvdr8" (UID: "9d5d58d3-36c6-44d2-bf2d-2297c435af12") : failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: E0128 04:08:10.261714    7772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0971d3d-f13f-421d-a7db-fa18ee862abb-kube-api-access-jqpw9 podName:f0971d3d-f13f-421d-a7db-fa18ee862abb nodeName:}" failed. No retries permitted until 2023-01-28 04:08:10.76169875 +0000 UTC m=+7.137825700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jqpw9" (UniqueName: "kubernetes.io/projected/f0971d3d-f13f-421d-a7db-fa18ee862abb-kube-api-access-jqpw9") pod "kube-proxy-2vxmw" (UID: "f0971d3d-f13f-421d-a7db-fa18ee862abb") : failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: I0128 04:08:10.915515    7772 scope.go:115] "RemoveContainer" containerID="689f2394c859575dfc2364323aeed3082f6bf6a03c02a86bfebf5893ace7b193"
	Jan 28 04:08:11 pause-539738 kubelet[7772]: I0128 04:08:11.702653    7772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16c544267dffcb63c0d09b5e96b77c5b7d4df254822a006bcc4ebbcbeb321c0f"
	Jan 28 04:08:13 pause-539738 kubelet[7772]: I0128 04:08:13.739420    7772 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jan 28 04:08:15 pause-539738 kubelet[7772]: I0128 04:08:15.112792    7772 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jan 28 04:08:22 pause-539738 kubelet[7772]: I0128 04:08:22.531025    7772 topology_manager.go:210] "Topology Admit Handler"
	Jan 28 04:08:22 pause-539738 kubelet[7772]: I0128 04:08:22.599180    7772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/28af396f-4ec7-455c-afe3-469c018c0197-tmp\") pod \"storage-provisioner\" (UID: \"28af396f-4ec7-455c-afe3-469c018c0197\") " pod="kube-system/storage-provisioner"
	Jan 28 04:08:22 pause-539738 kubelet[7772]: I0128 04:08:22.599270    7772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbmcm\" (UniqueName: \"kubernetes.io/projected/28af396f-4ec7-455c-afe3-469c018c0197-kube-api-access-tbmcm\") pod \"storage-provisioner\" (UID: \"28af396f-4ec7-455c-afe3-469c018c0197\") " pod="kube-system/storage-provisioner"
	Jan 28 04:08:23 pause-539738 kubelet[7772]: I0128 04:08:23.885601    7772 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.885510354 pod.CreationTimestamp="2023-01-28 04:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-28 04:08:23.883976405 +0000 UTC m=+20.260103372" watchObservedRunningTime="2023-01-28 04:08:23.885510354 +0000 UTC m=+20.261637321"
	
	* 
	* ==> storage-provisioner [134943d77a41] <==
	* I0128 04:08:23.968265       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0128 04:08:23.999481       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0128 04:08:24.000206       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0128 04:08:24.017061       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0128 04:08:24.019421       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-539738_83055290-6640-4e7a-8a08-35a811fa0d82!
	I0128 04:08:24.020588       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"afe395e0-edf0-49ca-b725-64635464d2ad", APIVersion:"v1", ResourceVersion:"522", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-539738_83055290-6640-4e7a-8a08-35a811fa0d82 became leader
	I0128 04:08:24.121244       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-539738_83055290-6640-4e7a-8a08-35a811fa0d82!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-539738 -n pause-539738
helpers_test.go:261: (dbg) Run:  kubectl --context pause-539738 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
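For local triage, the same post-mortem checks can be replayed by hand against the profile. A minimal sketch, assuming the pause-539738 profile still exists on the agent and the freshly built out/minikube-linux-amd64 binary is used; every flag below is taken verbatim from the helpers_test.go commands logged in this report:

	# Host and API server state for the profile
	out/minikube-linux-amd64 status --format={{.Host}} -p pause-539738 -n pause-539738
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-539738 -n pause-539738
	# Names of any pods that are not in the Running phase
	kubectl --context pause-539738 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# Last 25 lines of cluster logs (the same dump captured below)
	out/minikube-linux-amd64 -p pause-539738 logs -n 25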
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-539738 -n pause-539738
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-539738 logs -n 25
E0128 04:08:27.091780   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-539738 logs -n 25: (1.244085236s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-877541 sudo find     | cilium-877541             | jenkins | v1.28.0 | 28 Jan 23 04:04 UTC |                     |
	|         | /etc/crio -type f -exec sh -c  |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                           |         |         |                     |                     |
	| ssh     | -p cilium-877541 sudo crio     | cilium-877541             | jenkins | v1.28.0 | 28 Jan 23 04:04 UTC |                     |
	|         | config                         |                           |         |         |                     |                     |
	| delete  | -p cilium-877541               | cilium-877541             | jenkins | v1.28.0 | 28 Jan 23 04:04 UTC | 28 Jan 23 04:04 UTC |
	| start   | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:04 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:04 UTC | 28 Jan 23 04:06 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| delete  | -p offline-docker-466600       | offline-docker-466600     | jenkins | v1.28.0 | 28 Jan 23 04:05 UTC | 28 Jan 23 04:05 UTC |
	| start   | -p kubernetes-upgrade-994986   | kubernetes-upgrade-994986 | jenkins | v1.28.0 | 28 Jan 23 04:05 UTC | 28 Jan 23 04:06 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| start   | -p running-upgrade-482422      | running-upgrade-482422    | jenkins | v1.28.0 | 28 Jan 23 04:05 UTC | 28 Jan 23 04:07 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:06 UTC | 28 Jan 23 04:06 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:06 UTC | 28 Jan 23 04:06 UTC |
	| start   | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:06 UTC | 28 Jan 23 04:07 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	| start   | -p pause-539738                | pause-539738              | jenkins | v1.28.0 | 28 Jan 23 04:06 UTC | 28 Jan 23 04:08 UTC |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-994986   | kubernetes-upgrade-994986 | jenkins | v1.28.0 | 28 Jan 23 04:06 UTC | 28 Jan 23 04:07 UTC |
	| start   | -p kubernetes-upgrade-994986   | kubernetes-upgrade-994986 | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:08 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-482422      | running-upgrade-482422    | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:07 UTC |
	| ssh     | -p NoKubernetes-398207 sudo    | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| profile | list                           | minikube                  | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:07 UTC |
	| profile | list --output=json             | minikube                  | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:07 UTC |
	| stop    | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:07 UTC |
	| start   | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:07 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-398207 sudo    | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-398207         | NoKubernetes-398207       | jenkins | v1.28.0 | 28 Jan 23 04:07 UTC | 28 Jan 23 04:08 UTC |
	| start   | -p kubernetes-upgrade-994986   | kubernetes-upgrade-994986 | jenkins | v1.28.0 | 28 Jan 23 04:08 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-994986   | kubernetes-upgrade-994986 | jenkins | v1.28.0 | 28 Jan 23 04:08 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-746602   | force-systemd-flag-746602 | jenkins | v1.28.0 | 28 Jan 23 04:08 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 04:08:18
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 04:08:18.049837   29125 out.go:296] Setting OutFile to fd 1 ...
	I0128 04:08:18.050033   29125 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 04:08:18.050043   29125 out.go:309] Setting ErrFile to fd 2...
	I0128 04:08:18.050047   29125 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 04:08:18.050159   29125 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3903/.minikube/bin
	I0128 04:08:18.050701   29125 out.go:303] Setting JSON to false
	I0128 04:08:18.051771   29125 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3049,"bootTime":1674875849,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0128 04:08:18.051837   29125 start.go:135] virtualization: kvm guest
	I0128 04:08:18.054429   29125 out.go:177] * [force-systemd-flag-746602] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0128 04:08:18.056140   29125 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 04:08:18.056048   29125 notify.go:220] Checking for updates...
	I0128 04:08:18.059470   29125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 04:08:18.061698   29125 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	I0128 04:08:18.063105   29125 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	I0128 04:08:18.068538   29125 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0128 04:08:18.070009   29125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 04:08:18.071858   29125 config.go:180] Loaded profile config "kubernetes-upgrade-994986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 04:08:18.071973   29125 config.go:180] Loaded profile config "pause-539738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 04:08:18.072042   29125 config.go:180] Loaded profile config "stopped-upgrade-426786": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0128 04:08:18.072088   29125 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 04:08:18.107312   29125 out.go:177] * Using the kvm2 driver based on user configuration
	I0128 04:08:18.108810   29125 start.go:296] selected driver: kvm2
	I0128 04:08:18.108826   29125 start.go:840] validating driver "kvm2" against <nil>
	I0128 04:08:18.108834   29125 start.go:851] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 04:08:18.109427   29125 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 04:08:18.109518   29125 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15565-3903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0128 04:08:18.127081   29125 install.go:137] /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2 version is 1.28.0
	I0128 04:08:18.127147   29125 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 04:08:18.127359   29125 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0128 04:08:18.127411   29125 cni.go:84] Creating CNI manager for ""
	I0128 04:08:18.127434   29125 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 04:08:18.127446   29125 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0128 04:08:18.127458   29125 start_flags.go:319] config:
	{Name:force-systemd-flag-746602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:force-systemd-flag-746602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0128 04:08:18.127576   29125 iso.go:125] acquiring lock: {Name:mkae097b889f6bf43a43f260cc80a114303c04bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 04:08:18.129789   29125 out.go:177] * Starting control plane node force-systemd-flag-746602 in cluster force-systemd-flag-746602
	I0128 04:08:18.131471   29125 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 04:08:18.131514   29125 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 04:08:18.131531   29125 cache.go:57] Caching tarball of preloaded images
	I0128 04:08:18.131612   29125 preload.go:174] Found /home/jenkins/minikube-integration/15565-3903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 04:08:18.131624   29125 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 04:08:18.131766   29125 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/force-systemd-flag-746602/config.json ...
	I0128 04:08:18.131798   29125 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/force-systemd-flag-746602/config.json: {Name:mk5335befc04e0920c98065c21e75c80618fae12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 04:08:18.131942   29125 cache.go:193] Successfully downloaded all kic artifacts
	I0128 04:08:18.131979   29125 start.go:364] acquiring machines lock for force-systemd-flag-746602: {Name:mk7ecd094a2b41dd9dbc24234c685e9f8765e635 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0128 04:08:18.132022   29125 start.go:368] acquired machines lock for "force-systemd-flag-746602" in 26.604µs
	I0128 04:08:18.132052   29125 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-746602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.26.1 ClusterName:force-systemd-flag-746602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 04:08:18.132140   29125 start.go:125] createHost starting for "" (driver="kvm2")
	I0128 04:08:16.920223   27997 pod_ready.go:102] pod "etcd-pause-539738" in "kube-system" namespace has status "Ready":"False"
	I0128 04:08:17.418837   27997 pod_ready.go:92] pod "etcd-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:17.418866   27997 pod_ready.go:81] duration metric: took 6.511830933s waiting for pod "etcd-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:17.418877   27997 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:17.423714   27997 pod_ready.go:92] pod "kube-apiserver-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:17.423733   27997 pod_ready.go:81] duration metric: took 4.846452ms waiting for pod "kube-apiserver-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:17.423741   27997 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.437255   27997 pod_ready.go:102] pod "kube-controller-manager-pause-539738" in "kube-system" namespace has status "Ready":"False"
	I0128 04:08:19.936646   27997 pod_ready.go:92] pod "kube-controller-manager-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:19.936678   27997 pod_ready.go:81] duration metric: took 2.512929249s waiting for pod "kube-controller-manager-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.936691   27997 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2vxmw" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.944738   27997 pod_ready.go:92] pod "kube-proxy-2vxmw" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:19.944761   27997 pod_ready.go:81] duration metric: took 8.062252ms waiting for pod "kube-proxy-2vxmw" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.944774   27997 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.951114   27997 pod_ready.go:92] pod "kube-scheduler-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:19.951132   27997 pod_ready.go:81] duration metric: took 6.350074ms waiting for pod "kube-scheduler-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:19.951141   27997 pod_ready.go:38] duration metric: took 9.054023106s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 04:08:19.951158   27997 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0128 04:08:19.965058   27997 ops.go:34] apiserver oom_adj: -16
	I0128 04:08:19.965079   27997 kubeadm.go:637] restartCluster took 56.576394153s
	I0128 04:08:19.965086   27997 kubeadm.go:403] StartCluster complete in 56.609465724s
	I0128 04:08:19.965103   27997 settings.go:142] acquiring lock: {Name:mkba6eafa5830ee298eee339d43ce981c09fcd93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 04:08:19.965179   27997 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15565-3903/kubeconfig
	I0128 04:08:19.966017   27997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3903/kubeconfig: {Name:mk6d09a9ae49503096fa4914dc61ac689beebb9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 04:08:19.966241   27997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0128 04:08:19.966328   27997 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0128 04:08:19.966408   27997 addons.go:65] Setting storage-provisioner=true in profile "pause-539738"
	I0128 04:08:19.966411   27997 config.go:180] Loaded profile config "pause-539738": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 04:08:19.966414   27997 addons.go:65] Setting default-storageclass=true in profile "pause-539738"
	I0128 04:08:19.966426   27997 addons.go:227] Setting addon storage-provisioner=true in "pause-539738"
	W0128 04:08:19.966434   27997 addons.go:236] addon storage-provisioner should already be in state true
	I0128 04:08:19.966439   27997 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-539738"
	I0128 04:08:19.966502   27997 host.go:66] Checking if "pause-539738" exists ...
	I0128 04:08:19.966819   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:19.966855   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:19.966856   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:19.966902   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:19.967133   27997 kapi.go:59] client config for pause-539738: &rest.Config{Host:"https://192.168.61.35:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 04:08:19.970013   27997 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-539738" context rescaled to 1 replicas
	I0128 04:08:19.970046   27997 start.go:221] Will wait 6m0s for node &{Name: IP:192.168.61.35 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 04:08:19.972097   27997 out.go:177] * Verifying Kubernetes components...
	I0128 04:08:19.973640   27997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 04:08:19.982426   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33553
	I0128 04:08:19.982827   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:19.983285   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:19.983307   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:19.983640   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:19.983915   27997 main.go:141] libmachine: (pause-539738) Calling .GetState
	I0128 04:08:19.986373   27997 kapi.go:59] client config for pause-539738: &rest.Config{Host:"https://192.168.61.35:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/profiles/pause-539738/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 04:08:19.988123   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40279
	I0128 04:08:19.988506   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:19.989003   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:19.989020   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:19.989470   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:19.989813   27997 addons.go:227] Setting addon default-storageclass=true in "pause-539738"
	W0128 04:08:19.989824   27997 addons.go:236] addon default-storageclass should already be in state true
	I0128 04:08:19.989845   27997 host.go:66] Checking if "pause-539738" exists ...
	I0128 04:08:19.990068   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:19.990081   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:19.990504   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:19.990529   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:20.010224   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0128 04:08:20.010819   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:20.011457   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:20.011479   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:20.014118   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34355
	I0128 04:08:20.014312   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:20.014460   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:20.014899   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:20.014918   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:20.015006   27997 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:20.015042   27997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:20.015420   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:20.015683   27997 main.go:141] libmachine: (pause-539738) Calling .GetState
	I0128 04:08:20.017689   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:08:20.020135   27997 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0128 04:08:16.692913   29022 machine.go:88] provisioning docker machine ...
	I0128 04:08:16.692933   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:16.693126   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetMachineName
	I0128 04:08:16.693289   29022 buildroot.go:166] provisioning hostname "kubernetes-upgrade-994986"
	I0128 04:08:16.693312   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetMachineName
	I0128 04:08:16.693475   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:16.696303   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.696779   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:16.696808   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.696951   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:16.697099   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:16.697252   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:16.697388   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:16.697558   29022 main.go:141] libmachine: Using SSH client type: native
	I0128 04:08:16.697732   29022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.83.15 22 <nil> <nil>}
	I0128 04:08:16.697751   29022 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-994986 && echo "kubernetes-upgrade-994986" | sudo tee /etc/hostname
	I0128 04:08:16.818238   29022 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-994986
	
	I0128 04:08:16.818269   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:16.821186   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.821535   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:16.821563   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.821714   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:16.821903   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:16.822101   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:16.822295   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:16.822489   29022 main.go:141] libmachine: Using SSH client type: native
	I0128 04:08:16.822679   29022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.83.15 22 <nil> <nil>}
	I0128 04:08:16.822709   29022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-994986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-994986/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-994986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 04:08:16.931763   29022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 04:08:16.931793   29022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3903/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3903/.minikube}
	I0128 04:08:16.931818   29022 buildroot.go:174] setting up certificates
	I0128 04:08:16.931838   29022 provision.go:83] configureAuth start
	I0128 04:08:16.931855   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetMachineName
	I0128 04:08:16.932133   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetIP
	I0128 04:08:16.935197   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.935667   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:16.935696   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.935883   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:16.938259   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.938619   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:16.938641   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:16.938760   29022 provision.go:138] copyHostCerts
	I0128 04:08:16.938799   29022 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3903/.minikube/ca.pem, removing ...
	I0128 04:08:16.938807   29022 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3903/.minikube/ca.pem
	I0128 04:08:16.938859   29022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3903/.minikube/ca.pem (1078 bytes)
	I0128 04:08:16.938964   29022 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3903/.minikube/cert.pem, removing ...
	I0128 04:08:16.938978   29022 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3903/.minikube/cert.pem
	I0128 04:08:16.939012   29022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3903/.minikube/cert.pem (1123 bytes)
	I0128 04:08:16.939086   29022 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3903/.minikube/key.pem, removing ...
	I0128 04:08:16.939094   29022 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3903/.minikube/key.pem
	I0128 04:08:16.939112   29022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3903/.minikube/key.pem (1679 bytes)
	I0128 04:08:16.939160   29022 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-994986 san=[192.168.83.15 192.168.83.15 localhost 127.0.0.1 minikube kubernetes-upgrade-994986]
	I0128 04:08:17.093835   29022 provision.go:172] copyRemoteCerts
	I0128 04:08:17.093903   29022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 04:08:17.093927   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.096940   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.097286   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.097323   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.097461   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.097667   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.097865   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.098060   29022 sshutil.go:53] new ssh client: &{IP:192.168.83.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/kubernetes-upgrade-994986/id_rsa Username:docker}
	I0128 04:08:17.181067   29022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0128 04:08:17.203249   29022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0128 04:08:17.228073   29022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 04:08:17.250782   29022 provision.go:86] duration metric: configureAuth took 318.925999ms
	I0128 04:08:17.250807   29022 buildroot.go:189] setting minikube options for container-runtime
	I0128 04:08:17.251016   29022 config.go:180] Loaded profile config "kubernetes-upgrade-994986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 04:08:17.251040   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:17.251270   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.253757   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.254164   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.254217   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.254341   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.254508   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.254694   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.254858   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.255027   29022 main.go:141] libmachine: Using SSH client type: native
	I0128 04:08:17.255158   29022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.83.15 22 <nil> <nil>}
	I0128 04:08:17.255171   29022 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 04:08:17.361758   29022 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0128 04:08:17.361782   29022 buildroot.go:70] root file system type: tmpfs
	I0128 04:08:17.361981   29022 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 04:08:17.362006   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.365107   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.365517   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.365566   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.365761   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.365965   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.366157   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.366312   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.366484   29022 main.go:141] libmachine: Using SSH client type: native
	I0128 04:08:17.366636   29022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.83.15 22 <nil> <nil>}
	I0128 04:08:17.366728   29022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 04:08:17.500094   29022 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 04:08:17.500129   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.503193   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.503638   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.503668   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.503945   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.504143   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.504325   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.504482   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.504654   29022 main.go:141] libmachine: Using SSH client type: native
	I0128 04:08:17.504829   29022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.83.15 22 <nil> <nil>}
	I0128 04:08:17.504854   29022 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 04:08:17.622110   29022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 04:08:17.622136   29022 machine.go:91] provisioned docker machine in 929.207043ms
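Note: the docker.service update above is idempotent: the unit is written to docker.service.new, diffed against the installed unit, and only moved into place (followed by daemon-reload, enable, restart) when the two differ. A hypothetical Go sketch of that compare-then-swap pattern, assuming local file access and passwordless sudo (minikube performs it through the SSH one-liner shown in the log):

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // updateUnit replaces the installed unit only when its content changed.
    func updateUnit(current, candidate string) error {
        oldData, _ := os.ReadFile(current) // a missing file simply counts as "changed"
        newData, err := os.ReadFile(candidate)
        if err != nil {
            return err
        }
        if bytes.Equal(oldData, newData) {
            return os.Remove(candidate) // nothing to do, drop the .new file
        }
        if err := os.Rename(candidate, current); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if err := exec.Command("sudo", args...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        if err := updateUnit("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new"); err != nil {
            panic(err)
        }
    }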
	I0128 04:08:17.622150   29022 start.go:300] post-start starting for "kubernetes-upgrade-994986" (driver="kvm2")
	I0128 04:08:17.622159   29022 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 04:08:17.622185   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:17.622517   29022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 04:08:17.622551   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.625414   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.625887   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.625921   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.626139   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.626323   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.626503   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.626684   29022 sshutil.go:53] new ssh client: &{IP:192.168.83.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/kubernetes-upgrade-994986/id_rsa Username:docker}
	I0128 04:08:17.714138   29022 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 04:08:17.718403   29022 info.go:137] Remote host: Buildroot 2021.02.12
	I0128 04:08:17.718426   29022 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3903/.minikube/addons for local assets ...
	I0128 04:08:17.718489   29022 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3903/.minikube/files for local assets ...
	I0128 04:08:17.718579   29022 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3903/.minikube/files/etc/ssl/certs/110622.pem -> 110622.pem in /etc/ssl/certs
	I0128 04:08:17.718716   29022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 04:08:17.728839   29022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3903/.minikube/files/etc/ssl/certs/110622.pem --> /etc/ssl/certs/110622.pem (1708 bytes)
	I0128 04:08:17.754476   29022 start.go:303] post-start completed in 132.310579ms
	I0128 04:08:17.754495   29022 fix.go:57] fixHost completed within 1.086776105s
	I0128 04:08:17.754521   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.758345   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.758915   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.758940   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.759289   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.759473   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.759730   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.759873   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.760053   29022 main.go:141] libmachine: Using SSH client type: native
	I0128 04:08:17.760222   29022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 192.168.83.15 22 <nil> <nil>}
	I0128 04:08:17.760236   29022 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0128 04:08:17.876316   29022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1674878897.869922584
	
	I0128 04:08:17.876341   29022 fix.go:207] guest clock: 1674878897.869922584
	I0128 04:08:17.876354   29022 fix.go:220] Guest: 2023-01-28 04:08:17.869922584 +0000 UTC Remote: 2023-01-28 04:08:17.754499124 +0000 UTC m=+16.652524418 (delta=115.42346ms)
	I0128 04:08:17.876378   29022 fix.go:191] guest clock delta is within tolerance: 115.42346ms
	I0128 04:08:17.876384   29022 start.go:83] releasing machines lock for "kubernetes-upgrade-994986", held for 1.208681301s
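Note: the "fix" step above reads the guest clock with "date +%s.%N", parses the seconds.nanoseconds value (1674878897.869922584 in this run), and compares it to the host clock to confirm the skew is within tolerance. An illustrative Go sketch of that delta computation; the one-second tolerance below is an assumption, not minikube's actual setting:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestClock parses "seconds.nanoseconds" as printed by `date +%s.%N`.
    func guestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 { // assumes a full 9-digit nanosecond field
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := guestClock("1674878897.869922584") // value taken from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta: %v (assumed tolerance: 1s)\n", delta)
    }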
	I0128 04:08:17.876409   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:17.876678   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetIP
	I0128 04:08:17.879559   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.879901   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.879932   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.880121   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:17.880698   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:17.880877   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .DriverName
	I0128 04:08:17.880991   29022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 04:08:17.881031   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.881111   29022 ssh_runner.go:195] Run: cat /version.json
	I0128 04:08:17.881125   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHHostname
	I0128 04:08:17.884280   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.884650   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.884679   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.884698   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.884932   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.885027   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:fd:9d", ip: ""} in network mk-kubernetes-upgrade-994986: {Iface:virbr1 ExpiryTime:2023-01-28 05:07:23 +0000 UTC Type:0 Mac:52:54:00:66:fd:9d Iaid: IPaddr:192.168.83.15 Prefix:24 Hostname:kubernetes-upgrade-994986 Clientid:01:52:54:00:66:fd:9d}
	I0128 04:08:17.885045   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) DBG | domain kubernetes-upgrade-994986 has defined IP address 192.168.83.15 and MAC address 52:54:00:66:fd:9d in network mk-kubernetes-upgrade-994986
	I0128 04:08:17.885087   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.885235   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.885318   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHPort
	I0128 04:08:17.885384   29022 sshutil.go:53] new ssh client: &{IP:192.168.83.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/kubernetes-upgrade-994986/id_rsa Username:docker}
	I0128 04:08:17.885481   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHKeyPath
	I0128 04:08:17.885617   29022 main.go:141] libmachine: (kubernetes-upgrade-994986) Calling .GetSSHUsername
	I0128 04:08:17.885768   29022 sshutil.go:53] new ssh client: &{IP:192.168.83.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/kubernetes-upgrade-994986/id_rsa Username:docker}
	W0128 04:08:17.979757   29022 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
	I0128 04:08:17.979863   29022 ssh_runner.go:195] Run: systemctl --version
	I0128 04:08:18.004370   29022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0128 04:08:18.011919   29022 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0128 04:08:18.012025   29022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 04:08:18.023993   29022 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 04:08:18.041222   29022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 04:08:18.051173   29022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 04:08:18.066118   29022 cni.go:307] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
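Note: the find/sed invocations above rewrite any existing bridge or podman CNI config so its pod subnet becomes 10.244.0.0/16 (and its gateway 10.244.0.1). A small illustrative sketch of the same text substitution in Go; the sample conflist content and file shape are assumptions for the example, only the replacement values come from the log:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := []byte(`{"name":"podman","plugins":[{"type":"bridge","ipam":{"ranges":[[{"subnet":"10.88.0.0/16","gateway":"10.88.0.1"}]]}}]}`)
        subnet := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
        gateway := regexp.MustCompile(`"gateway":\s*"[^"]*"`)
        conf = subnet.ReplaceAll(conf, []byte(`"subnet": "10.244.0.0/16"`))
        conf = gateway.ReplaceAll(conf, []byte(`"gateway": "10.244.0.1"`))
        fmt.Println(string(conf))
    }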
	I0128 04:08:18.066139   29022 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 04:08:18.066253   29022 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 04:08:18.097095   29022 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 04:08:18.097120   29022 docker.go:560] Images already preloaded, skipping extraction
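Note: the preload check above lists the images already present in the guest's Docker daemon ("docker images --format {{.Repository}}:{{.Tag}}") and, since the v1.26.1 control-plane images are all there, skips extracting the preload tarball. An illustrative Go sketch of that presence check, using a few of the image names from the log; the helper itself is hypothetical:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        required := []string{ // subset copied from the log above
            "registry.k8s.io/kube-apiserver:v1.26.1",
            "registry.k8s.io/etcd:3.5.6-0",
            "registry.k8s.io/coredns/coredns:v1.9.3",
        }
        for _, img := range required {
            if !have[img] {
                fmt.Println("missing preloaded image:", img)
            }
        }
        fmt.Println("preload check done")
    }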
	I0128 04:08:18.097130   29022 start.go:472] detecting cgroup driver to use...
	I0128 04:08:18.097261   29022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 04:08:18.117843   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 04:08:18.133993   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 04:08:18.145418   29022 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 04:08:18.145484   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 04:08:18.176708   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 04:08:18.202186   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 04:08:18.223484   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 04:08:18.240083   29022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 04:08:18.264601   29022 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 04:08:18.281983   29022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 04:08:18.300085   29022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 04:08:18.323975   29022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 04:08:18.520743   29022 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 04:08:18.540016   29022 start.go:472] detecting cgroup driver to use...
	I0128 04:08:18.540096   29022 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 04:08:18.556754   29022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0128 04:08:18.571199   29022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0128 04:08:18.596423   29022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0128 04:08:18.611640   29022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 04:08:18.623025   29022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 04:08:18.640377   29022 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 04:08:18.788949   29022 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 04:08:18.956600   29022 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 04:08:18.956631   29022 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 04:08:18.972993   29022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 04:08:19.142994   29022 ssh_runner.go:195] Run: sudo systemctl restart docker
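Note: the 144-byte /etc/docker/daemon.json copied in above (docker.go:529, "configuring docker to use cgroupfs as cgroup driver") is not shown in the log. A minimal daemon.json selecting the cgroupfs driver typically looks like the content in this sketch; the exact fields are an assumption, not the file minikube generates, and the sketch writes to the current directory rather than /etc/docker:

    package main

    import "os"

    func main() {
        // Assumed example content; only the cgroupfs driver choice is taken from the log.
        daemonJSON := []byte(`{
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "storage-driver": "overlay2"
    }
    `)
        if err := os.WriteFile("daemon.json", daemonJSON, 0o644); err != nil {
            panic(err)
        }
    }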
	I0128 04:08:20.021649   27997 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 04:08:20.021666   27997 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0128 04:08:20.021683   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:08:20.024866   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:08:20.025519   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:08:20.025545   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:08:20.025726   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:08:20.025894   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:08:20.026077   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:08:20.026226   27997 sshutil.go:53] new ssh client: &{IP:192.168.61.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/pause-539738/id_rsa Username:docker}
	I0128 04:08:20.033882   27997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0128 04:08:20.034220   27997 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:20.034646   27997 main.go:141] libmachine: Using API Version  1
	I0128 04:08:20.034662   27997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:20.034945   27997 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:20.035142   27997 main.go:141] libmachine: (pause-539738) Calling .GetState
	I0128 04:08:20.036874   27997 main.go:141] libmachine: (pause-539738) Calling .DriverName
	I0128 04:08:20.037194   27997 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0128 04:08:20.037218   27997 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0128 04:08:20.037237   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHHostname
	I0128 04:08:20.040521   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:08:20.041037   27997 main.go:141] libmachine: (pause-539738) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:be:42", ip: ""} in network mk-pause-539738: {Iface:virbr3 ExpiryTime:2023-01-28 05:05:04 +0000 UTC Type:0 Mac:52:54:00:a3:be:42 Iaid: IPaddr:192.168.61.35 Prefix:24 Hostname:pause-539738 Clientid:01:52:54:00:a3:be:42}
	I0128 04:08:20.041057   27997 main.go:141] libmachine: (pause-539738) DBG | domain pause-539738 has defined IP address 192.168.61.35 and MAC address 52:54:00:a3:be:42 in network mk-pause-539738
	I0128 04:08:20.041208   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHPort
	I0128 04:08:20.041364   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHKeyPath
	I0128 04:08:20.041504   27997 main.go:141] libmachine: (pause-539738) Calling .GetSSHUsername
	I0128 04:08:20.041604   27997 sshutil.go:53] new ssh client: &{IP:192.168.61.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/pause-539738/id_rsa Username:docker}
	I0128 04:08:20.139989   27997 start.go:881] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0128 04:08:20.140046   27997 node_ready.go:35] waiting up to 6m0s for node "pause-539738" to be "Ready" ...
	I0128 04:08:20.143009   27997 node_ready.go:49] node "pause-539738" has status "Ready":"True"
	I0128 04:08:20.143027   27997 node_ready.go:38] duration metric: took 2.970545ms waiting for node "pause-539738" to be "Ready" ...
	I0128 04:08:20.143034   27997 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 04:08:20.148143   27997 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-jvdr8" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:20.171291   27997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 04:08:20.194995   27997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0128 04:08:20.216817   27997 pod_ready.go:92] pod "coredns-787d4945fb-jvdr8" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:20.216845   27997 pod_ready.go:81] duration metric: took 68.682415ms waiting for pod "coredns-787d4945fb-jvdr8" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:20.216857   27997 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:20.616097   27997 pod_ready.go:92] pod "etcd-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:20.616171   27997 pod_ready.go:81] duration metric: took 399.304996ms waiting for pod "etcd-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:20.616192   27997 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:21.029453   27997 pod_ready.go:92] pod "kube-apiserver-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:21.029481   27997 pod_ready.go:81] duration metric: took 413.271931ms waiting for pod "kube-apiserver-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:21.029497   27997 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:21.585474   27997 pod_ready.go:92] pod "kube-controller-manager-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:21.585504   27997 pod_ready.go:81] duration metric: took 555.998841ms waiting for pod "kube-controller-manager-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:21.585519   27997 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2vxmw" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:22.194411   27997 pod_ready.go:92] pod "kube-proxy-2vxmw" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:22.194435   27997 pod_ready.go:81] duration metric: took 608.908313ms waiting for pod "kube-proxy-2vxmw" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:22.194447   27997 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:22.270839   27997 pod_ready.go:92] pod "kube-scheduler-pause-539738" in "kube-system" namespace has status "Ready":"True"
	I0128 04:08:22.270869   27997 pod_ready.go:81] duration metric: took 76.409295ms waiting for pod "kube-scheduler-pause-539738" in "kube-system" namespace to be "Ready" ...
	I0128 04:08:22.270881   27997 pod_ready.go:38] duration metric: took 2.127838794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
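Note: the pod_ready lines above poll each system-critical pod until its Ready condition is True, with a per-pod timeout. An illustrative sketch of the same kind of readiness poll, shelling out to kubectl with a JSONPath query; the pod name is taken from the log, while the namespace, interval, and use of kubectl (rather than a direct API client) are assumptions for the example:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func podReady(name string) (bool, error) {
        out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
        for time.Now().Before(deadline) {
            if ok, err := podReady("etcd-pause-539738"); err == nil && ok {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod")
    }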
	I0128 04:08:22.270907   27997 api_server.go:51] waiting for apiserver process to appear ...
	I0128 04:08:22.270958   27997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 04:08:22.523676   27997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.352347256s)
	I0128 04:08:22.523720   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.523733   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.523819   27997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.328799927s)
	I0128 04:08:22.523832   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.523840   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.523883   27997 api_server.go:71] duration metric: took 2.553813709s to wait for apiserver process to appear ...
	I0128 04:08:22.523890   27997 api_server.go:87] waiting for apiserver healthz status ...
	I0128 04:08:22.523901   27997 api_server.go:252] Checking apiserver healthz at https://192.168.61.35:8443/healthz ...
	I0128 04:08:22.527469   27997 main.go:141] libmachine: (pause-539738) DBG | Closing plugin on server side
	I0128 04:08:22.527525   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.527543   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.527552   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.527566   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.527575   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.527590   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.527579   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.527652   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.527673   27997 main.go:141] libmachine: (pause-539738) DBG | Closing plugin on server side
	I0128 04:08:22.527862   27997 main.go:141] libmachine: (pause-539738) DBG | Closing plugin on server side
	I0128 04:08:22.527907   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.527918   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.527942   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.527955   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.527971   27997 main.go:141] libmachine: Making call to close driver server
	I0128 04:08:22.527981   27997 main.go:141] libmachine: (pause-539738) Calling .Close
	I0128 04:08:22.528224   27997 main.go:141] libmachine: (pause-539738) DBG | Closing plugin on server side
	I0128 04:08:22.528266   27997 main.go:141] libmachine: Successfully made call to close driver server
	I0128 04:08:22.528281   27997 main.go:141] libmachine: Making call to close connection to plugin binary
	I0128 04:08:22.530083   27997 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0128 04:08:18.134024   29125 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0128 04:08:18.134174   29125 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/15565-3903/.minikube/bin/docker-machine-driver-kvm2
	I0128 04:08:18.134230   29125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 04:08:18.149057   29125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44709
	I0128 04:08:18.149437   29125 main.go:141] libmachine: () Calling .GetVersion
	I0128 04:08:18.150172   29125 main.go:141] libmachine: Using API Version  1
	I0128 04:08:18.150195   29125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 04:08:18.150602   29125 main.go:141] libmachine: () Calling .GetMachineName
	I0128 04:08:18.150838   29125 main.go:141] libmachine: (force-systemd-flag-746602) Calling .GetMachineName
	I0128 04:08:18.151044   29125 main.go:141] libmachine: (force-systemd-flag-746602) Calling .DriverName
	I0128 04:08:18.151265   29125 start.go:159] libmachine.API.Create for "force-systemd-flag-746602" (driver="kvm2")
	I0128 04:08:18.151303   29125 client.go:168] LocalClient.Create starting
	I0128 04:08:18.151339   29125 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3903/.minikube/certs/ca.pem
	I0128 04:08:18.151386   29125 main.go:141] libmachine: Decoding PEM data...
	I0128 04:08:18.151431   29125 main.go:141] libmachine: Parsing certificate...
	I0128 04:08:18.151521   29125 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3903/.minikube/certs/cert.pem
	I0128 04:08:18.151555   29125 main.go:141] libmachine: Decoding PEM data...
	I0128 04:08:18.151582   29125 main.go:141] libmachine: Parsing certificate...
	I0128 04:08:18.151619   29125 main.go:141] libmachine: Running pre-create checks...
	I0128 04:08:18.151636   29125 main.go:141] libmachine: (force-systemd-flag-746602) Calling .PreCreateCheck
	I0128 04:08:18.152097   29125 main.go:141] libmachine: (force-systemd-flag-746602) Calling .GetConfigRaw
	I0128 04:08:18.152608   29125 main.go:141] libmachine: Creating machine...
	I0128 04:08:18.152629   29125 main.go:141] libmachine: (force-systemd-flag-746602) Calling .Create
	I0128 04:08:18.152783   29125 main.go:141] libmachine: (force-systemd-flag-746602) Creating KVM machine...
	I0128 04:08:18.154221   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | found existing default KVM network
	I0128 04:08:18.155948   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:18.155769   29147 network.go:295] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc0001881c8] misses:0}
	I0128 04:08:18.155985   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:18.155866   29147 network.go:241] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0128 04:08:18.161375   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | trying to create private KVM network mk-force-systemd-flag-746602 192.168.39.0/24...
	I0128 04:08:18.253704   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | private KVM network mk-force-systemd-flag-746602 192.168.39.0/24 created
	I0128 04:08:18.253823   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting up store path in /home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602 ...
	I0128 04:08:18.253933   29125 main.go:141] libmachine: (force-systemd-flag-746602) Building disk image from file:///home/jenkins/minikube-integration/15565-3903/.minikube/cache/iso/amd64/minikube-v1.29.0-1674856271-15565-amd64.iso
	I0128 04:08:18.254045   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:18.253983   29147 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/15565-3903/.minikube
	I0128 04:08:18.254165   29125 main.go:141] libmachine: (force-systemd-flag-746602) Downloading /home/jenkins/minikube-integration/15565-3903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/15565-3903/.minikube/cache/iso/amd64/minikube-v1.29.0-1674856271-15565-amd64.iso...
	I0128 04:08:18.453457   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:18.453336   29147 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602/id_rsa...
	I0128 04:08:18.505183   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:18.505075   29147 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602/force-systemd-flag-746602.rawdisk...
	I0128 04:08:18.505216   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Writing magic tar header
	I0128 04:08:18.505231   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Writing SSH key tar header
	I0128 04:08:18.505250   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:18.505185   29147 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602 ...
	I0128 04:08:18.505335   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602
	I0128 04:08:18.505373   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting executable bit set on /home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602 (perms=drwx------)
	I0128 04:08:18.505386   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15565-3903/.minikube/machines
	I0128 04:08:18.505417   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15565-3903/.minikube
	I0128 04:08:18.505434   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15565-3903
	I0128 04:08:18.505451   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0128 04:08:18.505469   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home/jenkins
	I0128 04:08:18.505486   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Checking permissions on dir: /home
	I0128 04:08:18.505503   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting executable bit set on /home/jenkins/minikube-integration/15565-3903/.minikube/machines (perms=drwxrwxr-x)
	I0128 04:08:18.505518   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | Skipping /home - not owner
	I0128 04:08:18.505535   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting executable bit set on /home/jenkins/minikube-integration/15565-3903/.minikube (perms=drwxr-xr-x)
	I0128 04:08:18.505551   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting executable bit set on /home/jenkins/minikube-integration/15565-3903 (perms=drwxrwxr-x)
	I0128 04:08:18.505564   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0128 04:08:18.505578   29125 main.go:141] libmachine: (force-systemd-flag-746602) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0128 04:08:18.505593   29125 main.go:141] libmachine: (force-systemd-flag-746602) Creating domain...
	I0128 04:08:18.506767   29125 main.go:141] libmachine: (force-systemd-flag-746602) define libvirt domain using xml: 
	I0128 04:08:18.506794   29125 main.go:141] libmachine: (force-systemd-flag-746602) <domain type='kvm'>
	I0128 04:08:18.506807   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <name>force-systemd-flag-746602</name>
	I0128 04:08:18.506822   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <memory unit='MiB'>2048</memory>
	I0128 04:08:18.506833   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <vcpu>2</vcpu>
	I0128 04:08:18.506840   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <features>
	I0128 04:08:18.506846   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <acpi/>
	I0128 04:08:18.506858   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <apic/>
	I0128 04:08:18.506891   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <pae/>
	I0128 04:08:18.506915   29125 main.go:141] libmachine: (force-systemd-flag-746602)     
	I0128 04:08:18.506928   29125 main.go:141] libmachine: (force-systemd-flag-746602)   </features>
	I0128 04:08:18.506944   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <cpu mode='host-passthrough'>
	I0128 04:08:18.506956   29125 main.go:141] libmachine: (force-systemd-flag-746602)   
	I0128 04:08:18.506968   29125 main.go:141] libmachine: (force-systemd-flag-746602)   </cpu>
	I0128 04:08:18.506983   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <os>
	I0128 04:08:18.506995   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <type>hvm</type>
	I0128 04:08:18.507009   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <boot dev='cdrom'/>
	I0128 04:08:18.507019   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <boot dev='hd'/>
	I0128 04:08:18.507031   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <bootmenu enable='no'/>
	I0128 04:08:18.507040   29125 main.go:141] libmachine: (force-systemd-flag-746602)   </os>
	I0128 04:08:18.507048   29125 main.go:141] libmachine: (force-systemd-flag-746602)   <devices>
	I0128 04:08:18.507059   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <disk type='file' device='cdrom'>
	I0128 04:08:18.507077   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <source file='/home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602/boot2docker.iso'/>
	I0128 04:08:18.507111   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <target dev='hdc' bus='scsi'/>
	I0128 04:08:18.507133   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <readonly/>
	I0128 04:08:18.507147   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </disk>
	I0128 04:08:18.507158   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <disk type='file' device='disk'>
	I0128 04:08:18.507174   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0128 04:08:18.507192   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <source file='/home/jenkins/minikube-integration/15565-3903/.minikube/machines/force-systemd-flag-746602/force-systemd-flag-746602.rawdisk'/>
	I0128 04:08:18.507204   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <target dev='hda' bus='virtio'/>
	I0128 04:08:18.507216   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </disk>
	I0128 04:08:18.507227   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <interface type='network'>
	I0128 04:08:18.507239   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <source network='mk-force-systemd-flag-746602'/>
	I0128 04:08:18.507253   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <model type='virtio'/>
	I0128 04:08:18.507273   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </interface>
	I0128 04:08:18.507287   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <interface type='network'>
	I0128 04:08:18.507298   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <source network='default'/>
	I0128 04:08:18.507312   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <model type='virtio'/>
	I0128 04:08:18.507321   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </interface>
	I0128 04:08:18.507335   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <serial type='pty'>
	I0128 04:08:18.507352   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <target port='0'/>
	I0128 04:08:18.507363   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </serial>
	I0128 04:08:18.507377   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <console type='pty'>
	I0128 04:08:18.507408   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <target type='serial' port='0'/>
	I0128 04:08:18.507423   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </console>
	I0128 04:08:18.507432   29125 main.go:141] libmachine: (force-systemd-flag-746602)     <rng model='virtio'>
	I0128 04:08:18.507447   29125 main.go:141] libmachine: (force-systemd-flag-746602)       <backend model='random'>/dev/random</backend>
	I0128 04:08:18.507458   29125 main.go:141] libmachine: (force-systemd-flag-746602)     </rng>
	I0128 04:08:18.507470   29125 main.go:141] libmachine: (force-systemd-flag-746602)     
	I0128 04:08:18.507485   29125 main.go:141] libmachine: (force-systemd-flag-746602)     
	I0128 04:08:18.507501   29125 main.go:141] libmachine: (force-systemd-flag-746602)   </devices>
	I0128 04:08:18.507513   29125 main.go:141] libmachine: (force-systemd-flag-746602) </domain>
	I0128 04:08:18.507524   29125 main.go:141] libmachine: (force-systemd-flag-746602) 
	I0128 04:08:18.512069   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:b6:7f:c4 in network default
	I0128 04:08:18.512643   29125 main.go:141] libmachine: (force-systemd-flag-746602) Ensuring networks are active...
	I0128 04:08:18.512669   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:18.513466   29125 main.go:141] libmachine: (force-systemd-flag-746602) Ensuring network default is active
	I0128 04:08:18.513773   29125 main.go:141] libmachine: (force-systemd-flag-746602) Ensuring network mk-force-systemd-flag-746602 is active
	I0128 04:08:18.514437   29125 main.go:141] libmachine: (force-systemd-flag-746602) Getting domain xml...
	I0128 04:08:18.515321   29125 main.go:141] libmachine: (force-systemd-flag-746602) Creating domain...
	I0128 04:08:19.921648   29125 main.go:141] libmachine: (force-systemd-flag-746602) Waiting to get IP...
	I0128 04:08:19.922401   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:19.922814   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:19.922838   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:19.922798   29147 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0128 04:08:20.187271   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:20.187847   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:20.187880   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:20.187755   29147 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0128 04:08:20.570464   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:20.571021   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:20.571057   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:20.570986   29147 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0128 04:08:20.995680   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:20.996369   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:20.996395   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:20.996317   29147 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0128 04:08:21.470972   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:21.471651   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:21.471685   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:21.471567   29147 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0128 04:08:22.060556   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:22.061111   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:22.061158   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:22.061063   29147 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0128 04:08:22.896406   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | domain force-systemd-flag-746602 has defined MAC address 52:54:00:ff:12:6b in network mk-force-systemd-flag-746602
	I0128 04:08:22.896868   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | unable to find current IP address of domain force-systemd-flag-746602 in network mk-force-systemd-flag-746602
	I0128 04:08:22.896899   29125 main.go:141] libmachine: (force-systemd-flag-746602) DBG | I0128 04:08:22.896811   29147 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
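Note: the repeated "will retry after ...: waiting for machine to come up" lines above show the driver re-querying libvirt for a DHCP lease with a growing, jittered delay between attempts. A hypothetical Go sketch of that retry pattern; the lookup function and the IP it eventually returns are stand-ins, not minikube's retry helper:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying libvirt for the domain's DHCP lease.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 { // pretend the lease has not been handed out yet
            return "", errors.New("unable to find current IP address")
        }
        return "192.168.39.23", nil // hypothetical address inside the reserved subnet
    }

    func main() {
        wait := 250 * time.Millisecond
        for attempt := 1; attempt <= 10; attempt++ {
            if ip, err := lookupIP(attempt); err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            jitter := time.Duration(rand.Int63n(int64(wait) / 2))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait+jitter)
            time.Sleep(wait + jitter)
            wait = wait * 3 / 2 // grow the delay between attempts
        }
        fmt.Println("gave up waiting for an IP")
    }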
	I0128 04:08:22.531478   27997 addons.go:488] enableAddons completed in 2.565153495s
	I0128 04:08:22.536235   27997 api_server.go:278] https://192.168.61.35:8443/healthz returned 200:
	ok
	I0128 04:08:22.546519   27997 api_server.go:140] control plane version: v1.26.1
	I0128 04:08:22.546542   27997 api_server.go:130] duration metric: took 22.645385ms to wait for apiserver health ...
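Note: the health wait above polls https://192.168.61.35:8443/healthz until it returns 200 with body "ok". An illustrative Go sketch of such a poll; certificate verification is skipped here purely for brevity (minikube verifies against the cluster CA), and the timeout values are assumptions:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.61.35:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("apiserver never became healthy")
    }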
	I0128 04:08:22.546567   27997 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 04:08:22.566601   27997 system_pods.go:59] 7 kube-system pods found
	I0128 04:08:22.566636   27997 system_pods.go:61] "coredns-787d4945fb-jvdr8" [9d5d58d3-36c6-44d2-bf2d-2297c435af12] Running
	I0128 04:08:22.566645   27997 system_pods.go:61] "etcd-pause-539738" [4e925a1f-e8e7-463f-9ca5-30f3bcf9e034] Running
	I0128 04:08:22.566652   27997 system_pods.go:61] "kube-apiserver-pause-539738" [b89c18b3-bea5-480d-8059-6f1909701f9b] Running
	I0128 04:08:22.566665   27997 system_pods.go:61] "kube-controller-manager-pause-539738" [6a7def17-49f7-49d3-9bc6-94c176e59887] Running
	I0128 04:08:22.566743   27997 system_pods.go:61] "kube-proxy-2vxmw" [f0971d3d-f13f-421d-a7db-fa18ee862abb] Running
	I0128 04:08:22.566750   27997 system_pods.go:61] "kube-scheduler-pause-539738" [bf3dd75f-9d11-4088-8afc-6e0200586918] Running
	I0128 04:08:22.566757   27997 system_pods.go:61] "storage-provisioner" [28af396f-4ec7-455c-afe3-469c018c0197] Pending
	I0128 04:08:22.566764   27997 system_pods.go:74] duration metric: took 20.191146ms to wait for pod list to return data ...
	I0128 04:08:22.566780   27997 default_sa.go:34] waiting for default service account to be created ...
	I0128 04:08:22.620765   27997 default_sa.go:45] found service account: "default"
	I0128 04:08:22.620791   27997 default_sa.go:55] duration metric: took 54.004254ms for default service account to be created ...
	I0128 04:08:22.620801   27997 system_pods.go:116] waiting for k8s-apps to be running ...
	I0128 04:08:22.820897   27997 system_pods.go:86] 7 kube-system pods found
	I0128 04:08:22.820980   27997 system_pods.go:89] "coredns-787d4945fb-jvdr8" [9d5d58d3-36c6-44d2-bf2d-2297c435af12] Running
	I0128 04:08:22.820994   27997 system_pods.go:89] "etcd-pause-539738" [4e925a1f-e8e7-463f-9ca5-30f3bcf9e034] Running
	I0128 04:08:22.821001   27997 system_pods.go:89] "kube-apiserver-pause-539738" [b89c18b3-bea5-480d-8059-6f1909701f9b] Running
	I0128 04:08:22.821009   27997 system_pods.go:89] "kube-controller-manager-pause-539738" [6a7def17-49f7-49d3-9bc6-94c176e59887] Running
	I0128 04:08:22.821026   27997 system_pods.go:89] "kube-proxy-2vxmw" [f0971d3d-f13f-421d-a7db-fa18ee862abb] Running
	I0128 04:08:22.821033   27997 system_pods.go:89] "kube-scheduler-pause-539738" [bf3dd75f-9d11-4088-8afc-6e0200586918] Running
	I0128 04:08:22.821048   27997 system_pods.go:89] "storage-provisioner" [28af396f-4ec7-455c-afe3-469c018c0197] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0128 04:08:22.821061   27997 system_pods.go:126] duration metric: took 200.254117ms to wait for k8s-apps to be running ...
	I0128 04:08:22.821072   27997 system_svc.go:44] waiting for kubelet service to be running ....
	I0128 04:08:22.821120   27997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 04:08:22.836608   27997 system_svc.go:56] duration metric: took 15.525635ms WaitForService to wait for kubelet.
	I0128 04:08:22.836632   27997 kubeadm.go:578] duration metric: took 2.866561868s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0128 04:08:22.836651   27997 node_conditions.go:102] verifying NodePressure condition ...
	I0128 04:08:23.017898   27997 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0128 04:08:23.017942   27997 node_conditions.go:123] node cpu capacity is 2
	I0128 04:08:23.017956   27997 node_conditions.go:105] duration metric: took 181.298919ms to run NodePressure ...
	I0128 04:08:23.017971   27997 start.go:226] waiting for startup goroutines ...
	I0128 04:08:23.018318   27997 ssh_runner.go:195] Run: rm -f paused
	I0128 04:08:23.106272   27997 start.go:538] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0128 04:08:23.108522   27997 out.go:177] * Done! kubectl is now configured to use "pause-539738" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-01-28 04:05:00 UTC, ends at Sat 2023-01-28 04:08:27 UTC. --
	Jan 28 04:08:05 pause-539738 dockerd[5359]: time="2023-01-28T04:08:05.218668258Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/28d6d166486a139da399ace5235173174b30a7fea42852988138278879272e63 pid=7948 runtime=io.containerd.runc.v2
	Jan 28 04:08:05 pause-539738 dockerd[5359]: time="2023-01-28T04:08:05.219158098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:08:05 pause-539738 dockerd[5359]: time="2023-01-28T04:08:05.219250357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:08:05 pause-539738 dockerd[5359]: time="2023-01-28T04:08:05.219271823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:08:05 pause-539738 dockerd[5359]: time="2023-01-28T04:08:05.219392373Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a57407ff9027105bc7270d3144d4687dd0cc4b00c60a446a5251ee1aba3137f2 pid=7958 runtime=io.containerd.runc.v2
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.054027780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.054287207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.054300203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.055468854Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/07f5b7ae3031e496dd4adb87df4c3504050e3ae8ad1f880c3c8ac1146edceb11 pid=8116 runtime=io.containerd.runc.v2
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.072042100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.072130150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.072141935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.072698555Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/16c544267dffcb63c0d09b5e96b77c5b7d4df254822a006bcc4ebbcbeb321c0f pid=8131 runtime=io.containerd.runc.v2
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.849050130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.849182209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.849208479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:08:11 pause-539738 dockerd[5359]: time="2023-01-28T04:08:11.849664772Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b6f504145d4562aee4ad585627a09297af2a971605b2060b9f2d20c903ba8876 pid=8313 runtime=io.containerd.runc.v2
	Jan 28 04:08:22 pause-539738 dockerd[5359]: time="2023-01-28T04:08:22.990275068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:08:22 pause-539738 dockerd[5359]: time="2023-01-28T04:08:22.990355281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:08:22 pause-539738 dockerd[5359]: time="2023-01-28T04:08:22.990369559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:08:22 pause-539738 dockerd[5359]: time="2023-01-28T04:08:22.991196497Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/afad6178a4e892e3bf961bb8dbba33ae9d8de4014d4b952f0758349d71fc45a7 pid=8549 runtime=io.containerd.runc.v2
	Jan 28 04:08:23 pause-539738 dockerd[5359]: time="2023-01-28T04:08:23.726919019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:08:23 pause-539738 dockerd[5359]: time="2023-01-28T04:08:23.727102052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:08:23 pause-539738 dockerd[5359]: time="2023-01-28T04:08:23.727117818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:08:23 pause-539738 dockerd[5359]: time="2023-01-28T04:08:23.727993188Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/134943d77a41e1c9c63e040a331c6acbd016d64079843e1ad94b581734bf60f0 pid=8601 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	134943d77a41e       6e38f40d628db       4 seconds ago        Running             storage-provisioner       0                   afad6178a4e89
	b6f504145d456       5185b96f0becf       16 seconds ago       Running             coredns                   2                   16c544267dffc
	07f5b7ae3031e       46a6bb3c77ce0       17 seconds ago       Running             kube-proxy                3                   eadfcd7aabadf
	a57407ff90271       655493523f607       23 seconds ago       Running             kube-scheduler            2                   1015b60a11e04
	28d6d166486a1       fce326961ae2d       23 seconds ago       Running             etcd                      3                   7c05d806c6cec
	d7554f64ab0e3       e9c08e11b07f6       27 seconds ago       Running             kube-controller-manager   2                   c29f61afeb3d7
	4dde87c760c48       deb04688c4a35       28 seconds ago       Running             kube-apiserver            3                   3da0b0f6c1f51
	f4d02970c201c       fce326961ae2d       45 seconds ago       Exited              etcd                      2                   b1adfd0dc97e3
	689f2394c8595       46a6bb3c77ce0       46 seconds ago       Exited              kube-proxy                2                   4cbfacd312e56
	ecd079acd243b       5185b96f0becf       About a minute ago   Exited              coredns                   1                   29628336e08f4
	a247c449d214a       655493523f607       About a minute ago   Exited              kube-scheduler            1                   ae418143b0c22
	7a3e62c8e65a3       deb04688c4a35       About a minute ago   Exited              kube-apiserver            2                   3c2eca72a2a1d
	be6ac4b353504       e9c08e11b07f6       About a minute ago   Exited              kube-controller-manager   1                   f537afa7d5fe6
	
	* 
	* ==> coredns [b6f504145d45] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:49720 - 3640 "HINFO IN 7174251185602643581.328645765898013938. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021702169s
	
	* 
	* ==> coredns [ecd079acd243] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:52201 - 59003 "HINFO IN 7374916446888445961.4050626325103425631. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029908073s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-539738
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-539738
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a22b9432724c1a7c0bfc1f92a18db163006c245
	                    minikube.k8s.io/name=pause-539738
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_28T04_05_47_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 28 Jan 2023 04:05:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-539738
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 28 Jan 2023 04:08:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 28 Jan 2023 04:08:09 +0000   Sat, 28 Jan 2023 04:05:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 28 Jan 2023 04:08:09 +0000   Sat, 28 Jan 2023 04:05:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 28 Jan 2023 04:08:09 +0000   Sat, 28 Jan 2023 04:05:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 28 Jan 2023 04:08:09 +0000   Sat, 28 Jan 2023 04:05:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.35
	  Hostname:    pause-539738
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e00382430134c4f8b57880d028c449b
	  System UUID:                7e003824-3013-4c4f-8b57-880d028c449b
	  Boot ID:                    5555b58d-bd4c-415b-8db1-9d0778132685
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-jvdr8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m28s
	  kube-system                 etcd-pause-539738                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m40s
	  kube-system                 kube-apiserver-pause-539738             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-controller-manager-pause-539738    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-proxy-2vxmw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-pause-539738             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m25s              kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 2m41s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m40s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m40s              kubelet          Node pause-539738 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m40s              kubelet          Node pause-539738 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m40s              kubelet          Node pause-539738 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                2m35s              kubelet          Node pause-539738 status is now: NodeReady
	  Normal  RegisteredNode           2m29s              node-controller  Node pause-539738 event: Registered Node pause-539738 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 24s)  kubelet          Node pause-539738 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 24s)  kubelet          Node pause-539738 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 24s)  kubelet          Node pause-539738 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node pause-539738 event: Registered Node pause-539738 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.003769] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.440171] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +0.293151] systemd-fstab-generator[936]: Ignoring "noauto" for root device
	[  +0.138510] systemd-fstab-generator[947]: Ignoring "noauto" for root device
	[  +0.155713] systemd-fstab-generator[960]: Ignoring "noauto" for root device
	[  +1.659489] systemd-fstab-generator[1107]: Ignoring "noauto" for root device
	[  +0.186554] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +0.158164] systemd-fstab-generator[1129]: Ignoring "noauto" for root device
	[  +0.136901] systemd-fstab-generator[1140]: Ignoring "noauto" for root device
	[  +5.553840] systemd-fstab-generator[1388]: Ignoring "noauto" for root device
	[  +1.018295] kauditd_printk_skb: 68 callbacks suppressed
	[ +12.937235] systemd-fstab-generator[2413]: Ignoring "noauto" for root device
	[Jan28 04:06] kauditd_printk_skb: 8 callbacks suppressed
	[ +10.690783] kauditd_printk_skb: 26 callbacks suppressed
	[Jan28 04:07] systemd-fstab-generator[4580]: Ignoring "noauto" for root device
	[  +0.249508] systemd-fstab-generator[4610]: Ignoring "noauto" for root device
	[  +0.187537] systemd-fstab-generator[4621]: Ignoring "noauto" for root device
	[  +0.203657] systemd-fstab-generator[4650]: Ignoring "noauto" for root device
	[  +9.772707] systemd-fstab-generator[5759]: Ignoring "noauto" for root device
	[  +0.138072] systemd-fstab-generator[5777]: Ignoring "noauto" for root device
	[  +0.133438] systemd-fstab-generator[5804]: Ignoring "noauto" for root device
	[  +0.114025] systemd-fstab-generator[5815]: Ignoring "noauto" for root device
	[  +1.191393] kauditd_printk_skb: 34 callbacks suppressed
	[Jan28 04:08] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.711051] systemd-fstab-generator[7766]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [28d6d166486a] <==
	* {"level":"info","ts":"2023-01-28T04:08:16.661Z","caller":"traceutil/trace.go:171","msg":"trace[2092030425] range","detail":"{range_begin:/registry/minions/pause-539738; range_end:; response_count:1; response_revision:493; }","duration":"188.603075ms","start":"2023-01-28T04:08:16.472Z","end":"2023-01-28T04:08:16.661Z","steps":["trace[2092030425] 'range keys from in-memory index tree'  (duration: 187.405043ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:08:21.575Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"157.589939ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10495786225604958759 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" value_size:641 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-01-28T04:08:21.575Z","caller":"traceutil/trace.go:171","msg":"trace[180746045] linearizableReadLoop","detail":"{readStateIndex:553; appliedIndex:552; }","duration":"225.160028ms","start":"2023-01-28T04:08:21.350Z","end":"2023-01-28T04:08:21.575Z","steps":["trace[180746045] 'read index received'  (duration: 67.452876ms)","trace[180746045] 'applied index is now lower than readState.Index'  (duration: 157.706304ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-28T04:08:21.576Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"168.339002ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-539738\" ","response":"range_response_count:1 size:4488"}
	{"level":"info","ts":"2023-01-28T04:08:21.576Z","caller":"traceutil/trace.go:171","msg":"trace[1674861722] range","detail":"{range_begin:/registry/minions/pause-539738; range_end:; response_count:1; response_revision:503; }","duration":"168.400561ms","start":"2023-01-28T04:08:21.407Z","end":"2023-01-28T04:08:21.576Z","steps":["trace[1674861722] 'agreement among raft nodes before linearized reading'  (duration: 168.279728ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-28T04:08:21.577Z","caller":"traceutil/trace.go:171","msg":"trace[1746340332] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"257.498052ms","start":"2023-01-28T04:08:21.319Z","end":"2023-01-28T04:08:21.577Z","steps":["trace[1746340332] 'process raft request'  (duration: 98.427552ms)","trace[1746340332] 'compare'  (duration: 157.502381ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-28T04:08:21.577Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"227.060657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-01-28T04:08:21.579Z","caller":"traceutil/trace.go:171","msg":"trace[1333583828] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:503; }","duration":"229.205222ms","start":"2023-01-28T04:08:21.350Z","end":"2023-01-28T04:08:21.579Z","steps":["trace[1333583828] 'agreement among raft nodes before linearized reading'  (duration: 225.338908ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:08:22.178Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"129.414872ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10495786225604958763 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/storage-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/storage-provisioner\" value_size:1073 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-01-28T04:08:22.178Z","caller":"traceutil/trace.go:171","msg":"trace[306829421] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"593.78677ms","start":"2023-01-28T04:08:21.584Z","end":"2023-01-28T04:08:22.178Z","steps":["trace[306829421] 'process raft request'  (duration: 463.865421ms)","trace[306829421] 'compare'  (duration: 129.336372ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-28T04:08:22.178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-28T04:08:21.584Z","time spent":"593.889869ms","remote":"127.0.0.1:52650","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1130,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/storage-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/storage-provisioner\" value_size:1073 >> failure:<>"}
	{"level":"info","ts":"2023-01-28T04:08:22.178Z","caller":"traceutil/trace.go:171","msg":"trace[369834852] linearizableReadLoop","detail":"{readStateIndex:554; appliedIndex:553; }","duration":"592.935062ms","start":"2023-01-28T04:08:21.585Z","end":"2023-01-28T04:08:22.178Z","steps":["trace[369834852] 'read index received'  (duration: 462.989395ms)","trace[369834852] 'applied index is now lower than readState.Index'  (duration: 129.943975ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-28T04:08:22.179Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"570.627526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-2vxmw\" ","response":"range_response_count:1 size:4540"}
	{"level":"info","ts":"2023-01-28T04:08:22.179Z","caller":"traceutil/trace.go:171","msg":"trace[308989808] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-2vxmw; range_end:; response_count:1; response_revision:504; }","duration":"570.681436ms","start":"2023-01-28T04:08:21.608Z","end":"2023-01-28T04:08:22.179Z","steps":["trace[308989808] 'agreement among raft nodes before linearized reading'  (duration: 570.53031ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:08:22.179Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-28T04:08:21.608Z","time spent":"570.718631ms","remote":"127.0.0.1:52604","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4564,"request content":"key:\"/registry/pods/kube-system/kube-proxy-2vxmw\" "}
	{"level":"warn","ts":"2023-01-28T04:08:22.179Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"519.566846ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-01-28T04:08:22.179Z","caller":"traceutil/trace.go:171","msg":"trace[142877402] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:504; }","duration":"519.605723ms","start":"2023-01-28T04:08:21.659Z","end":"2023-01-28T04:08:22.179Z","steps":["trace[142877402] 'agreement among raft nodes before linearized reading'  (duration: 519.554981ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:08:22.179Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-28T04:08:21.659Z","time spent":"519.692591ms","remote":"127.0.0.1:52616","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-01-28T04:08:22.182Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"597.162555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-01-28T04:08:22.183Z","caller":"traceutil/trace.go:171","msg":"trace[1078778854] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:504; }","duration":"597.834137ms","start":"2023-01-28T04:08:21.585Z","end":"2023-01-28T04:08:22.183Z","steps":["trace[1078778854] 'agreement among raft nodes before linearized reading'  (duration: 593.021189ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:08:22.183Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-28T04:08:21.585Z","time spent":"598.11911ms","remote":"127.0.0.1:52606","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":231,"request content":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" "}
	{"level":"info","ts":"2023-01-28T04:08:22.439Z","caller":"traceutil/trace.go:171","msg":"trace[71657755] linearizableReadLoop","detail":"{readStateIndex:556; appliedIndex:555; }","duration":"159.073405ms","start":"2023-01-28T04:08:22.280Z","end":"2023-01-28T04:08:22.439Z","steps":["trace[71657755] 'read index received'  (duration: 118.473492ms)","trace[71657755] 'applied index is now lower than readState.Index'  (duration: 40.599279ms)"],"step_count":2}
	{"level":"info","ts":"2023-01-28T04:08:22.439Z","caller":"traceutil/trace.go:171","msg":"trace[981654493] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"162.174517ms","start":"2023-01-28T04:08:22.277Z","end":"2023-01-28T04:08:22.439Z","steps":["trace[981654493] 'process raft request'  (duration: 121.217238ms)","trace[981654493] 'compare'  (duration: 40.233397ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-28T04:08:22.440Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"160.273309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2023-01-28T04:08:22.440Z","caller":"traceutil/trace.go:171","msg":"trace[692385285] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:506; }","duration":"160.345071ms","start":"2023-01-28T04:08:22.280Z","end":"2023-01-28T04:08:22.440Z","steps":["trace[692385285] 'agreement among raft nodes before linearized reading'  (duration: 159.502165ms)"],"step_count":1}
	
	* 
	* ==> etcd [f4d02970c201] <==
	* {"level":"info","ts":"2023-01-28T04:07:43.462Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-28T04:07:43.462Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"feba1a131c3b91a8","initial-advertise-peer-urls":["https://192.168.61.35:2380"],"listen-peer-urls":["https://192.168.61.35:2380"],"advertise-client-urls":["https://192.168.61.35:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.35:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-28T04:07:43.462Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-28T04:07:43.462Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.61.35:2380"}
	{"level":"info","ts":"2023-01-28T04:07:43.462Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.35:2380"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feba1a131c3b91a8 is starting a new election at term 3"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feba1a131c3b91a8 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feba1a131c3b91a8 received MsgPreVoteResp from feba1a131c3b91a8 at term 3"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feba1a131c3b91a8 became candidate at term 4"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feba1a131c3b91a8 received MsgVoteResp from feba1a131c3b91a8 at term 4"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feba1a131c3b91a8 became leader at term 4"}
	{"level":"info","ts":"2023-01-28T04:07:44.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: feba1a131c3b91a8 elected leader feba1a131c3b91a8 at term 4"}
	{"level":"info","ts":"2023-01-28T04:07:44.546Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"feba1a131c3b91a8","local-member-attributes":"{Name:pause-539738 ClientURLs:[https://192.168.61.35:2379]}","request-path":"/0/members/feba1a131c3b91a8/attributes","cluster-id":"960419a4944238d5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-28T04:07:44.546Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T04:07:44.546Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T04:07:44.548Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.35:2379"}
	{"level":"info","ts":"2023-01-28T04:07:44.549Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-28T04:07:44.549Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-28T04:07:44.549Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-28T04:07:56.797Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-01-28T04:07:56.797Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-539738","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.35:2380"],"advertise-client-urls":["https://192.168.61.35:2379"]}
	{"level":"info","ts":"2023-01-28T04:07:56.801Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"feba1a131c3b91a8","current-leader-member-id":"feba1a131c3b91a8"}
	{"level":"info","ts":"2023-01-28T04:07:56.804Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.61.35:2380"}
	{"level":"info","ts":"2023-01-28T04:07:56.805Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.61.35:2380"}
	{"level":"info","ts":"2023-01-28T04:07:56.805Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-539738","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.35:2380"],"advertise-client-urls":["https://192.168.61.35:2379"]}
	
	* 
	* ==> kernel <==
	*  04:08:27 up 3 min,  0 users,  load average: 2.40, 1.00, 0.39
	Linux pause-539738 5.10.57 #1 SMP Sat Jan 28 02:15:18 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4dde87c760c4] <==
	* I0128 04:08:08.911224       1 cache.go:39] Caches are synced for autoregister controller
	I0128 04:08:08.911504       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0128 04:08:08.912003       1 shared_informer.go:280] Caches are synced for configmaps
	I0128 04:08:08.913564       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0128 04:08:08.913576       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0128 04:08:08.914077       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0128 04:08:09.428559       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0128 04:08:09.718671       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0128 04:08:10.731127       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0128 04:08:10.751795       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0128 04:08:10.814948       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0128 04:08:10.860919       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0128 04:08:10.870327       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0128 04:08:22.180103       1 trace.go:219] Trace[550369369]: "Create" accept:application/json,audit-id:3111c81f-ae52-4c3c-8730-881fcf0136aa,client:127.0.0.1,protocol:HTTP/2.0,resource:clusterrolebindings,scope:resource,url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings,user-agent:kubectl/v1.26.1 (linux/amd64) kubernetes/8f94681,verb:POST (28-Jan-2023 04:08:21.583) (total time: 596ms):
	Trace[550369369]: ["Create etcd3" audit-id:3111c81f-ae52-4c3c-8730-881fcf0136aa,key:/clusterrolebindings/storage-provisioner,type:*rbac.ClusterRoleBinding,resource:clusterrolebindings.rbac.authorization.k8s.io 595ms (04:08:21.584)
	Trace[550369369]:  ---"Txn call succeeded" 595ms (04:08:22.179)]
	Trace[550369369]: [596.064806ms] [596.064806ms] END
	I0128 04:08:22.182118       1 trace.go:219] Trace[1034241223]: "Get" accept:application/json, */*,audit-id:740a3e31-1f59-461a-9d83-fe2bd4ba4623,client:192.168.61.1,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-2vxmw,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (28-Jan-2023 04:08:21.607) (total time: 574ms):
	Trace[1034241223]: ---"About to write a response" 572ms (04:08:22.180)
	Trace[1034241223]: [574.421356ms] [574.421356ms] END
	I0128 04:08:22.185810       1 trace.go:219] Trace[2033090596]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:b0817005-4ac4-45fd-b0f2-e4e3f0367c68,client:192.168.61.35,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/deployment-controller,user-agent:kube-controller-manager/v1.26.1 (linux/amd64) kubernetes/8f94681/kube-controller-manager,verb:GET (28-Jan-2023 04:08:21.584) (total time: 600ms):
	Trace[2033090596]: ---"About to write a response" 600ms (04:08:22.185)
	Trace[2033090596]: [600.936521ms] [600.936521ms] END
	I0128 04:08:22.448820       1 controller.go:615] quota admission added evaluator for: endpoints
	I0128 04:08:22.565271       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [7a3e62c8e65a] <==
	* W0128 04:07:37.282358       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0128 04:07:42.285431       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0128 04:07:42.398490       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0128 04:07:46.535702       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-controller-manager [be6ac4b35350] <==
	* I0128 04:07:26.497184       1 serving.go:348] Generated self-signed cert in-memory
	I0128 04:07:26.975139       1 controllermanager.go:182] Version: v1.26.1
	I0128 04:07:26.975179       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 04:07:26.978587       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0128 04:07:26.979626       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0128 04:07:26.979835       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 04:07:26.979919       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	F0128 04:07:47.540500       1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.61.35:8443/healthz": dial tcp 192.168.61.35:8443: connect: connection refused
	
	* 
	* ==> kube-controller-manager [d7554f64ab0e] <==
	* I0128 04:08:22.557808       1 shared_informer.go:280] Caches are synced for job
	I0128 04:08:22.567163       1 shared_informer.go:280] Caches are synced for taint
	I0128 04:08:22.567400       1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone: 
	W0128 04:08:22.567641       1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-539738. Assuming now as a timestamp.
	I0128 04:08:22.567717       1 node_lifecycle_controller.go:1254] Controller detected that zone  is now in state Normal.
	I0128 04:08:22.567887       1 event.go:294] "Event occurred" object="pause-539738" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-539738 event: Registered Node pause-539738 in Controller"
	I0128 04:08:22.568019       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0128 04:08:22.568116       1 taint_manager.go:211] "Sending events to api server"
	I0128 04:08:22.571510       1 shared_informer.go:280] Caches are synced for HPA
	I0128 04:08:22.574443       1 shared_informer.go:280] Caches are synced for crt configmap
	I0128 04:08:22.580871       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0128 04:08:22.583895       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0128 04:08:22.584074       1 shared_informer.go:280] Caches are synced for daemon sets
	I0128 04:08:22.588511       1 shared_informer.go:280] Caches are synced for deployment
	I0128 04:08:22.591948       1 shared_informer.go:280] Caches are synced for TTL
	I0128 04:08:22.594947       1 shared_informer.go:280] Caches are synced for stateful set
	I0128 04:08:22.604016       1 shared_informer.go:280] Caches are synced for PVC protection
	I0128 04:08:22.640046       1 shared_informer.go:280] Caches are synced for attach detach
	I0128 04:08:22.671938       1 shared_informer.go:280] Caches are synced for disruption
	I0128 04:08:22.698208       1 shared_informer.go:280] Caches are synced for resource quota
	I0128 04:08:22.720456       1 shared_informer.go:280] Caches are synced for resource quota
	I0128 04:08:22.745370       1 shared_informer.go:280] Caches are synced for ReplicationController
	I0128 04:08:23.081851       1 shared_informer.go:280] Caches are synced for garbage collector
	I0128 04:08:23.082162       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0128 04:08:23.143067       1 shared_informer.go:280] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [07f5b7ae3031] <==
	* I0128 04:08:11.217449       1 node.go:163] Successfully retrieved node IP: 192.168.61.35
	I0128 04:08:11.217539       1 server_others.go:109] "Detected node IP" address="192.168.61.35"
	I0128 04:08:11.217589       1 server_others.go:535] "Using iptables proxy"
	I0128 04:08:11.273268       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0128 04:08:11.273320       1 server_others.go:176] "Using iptables Proxier"
	I0128 04:08:11.273371       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0128 04:08:11.273835       1 server.go:655] "Version info" version="v1.26.1"
	I0128 04:08:11.273875       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 04:08:11.275438       1 config.go:317] "Starting service config controller"
	I0128 04:08:11.275480       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0128 04:08:11.275513       1 config.go:226] "Starting endpoint slice config controller"
	I0128 04:08:11.275519       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0128 04:08:11.276066       1 config.go:444] "Starting node config controller"
	I0128 04:08:11.276081       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0128 04:08:11.375822       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0128 04:08:11.375885       1 shared_informer.go:280] Caches are synced for service config
	I0128 04:08:11.376235       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-proxy [689f2394c859] <==
	* E0128 04:07:47.563911       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-539738": dial tcp 192.168.61.35:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.35:40366->192.168.61.35:8443: read: connection reset by peer
	E0128 04:07:48.598141       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-539738": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:50.951085       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-539738": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.550672       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-539738": dial tcp 192.168.61.35:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [a247c449d214] <==
	* W0128 04:07:55.231126       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.61.35:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.231170       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.35:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:55.285416       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.61.35:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.285453       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.35:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:55.298386       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.61.35:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.298434       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.61.35:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:55.638461       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.35:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.638497       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.35:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:55.936882       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.61.35:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.936966       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.61.35:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:55.945925       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.61.35:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:55.945969       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.61.35:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:56.025619       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.35:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:56.025805       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.35:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:56.255103       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.61.35:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:56.255267       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.61.35:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:56.265319       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.61.35:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:56.265459       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.61.35:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	W0128 04:07:56.487409       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.61.35:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	E0128 04:07:56.487503       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.61.35:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.35:8443: connect: connection refused
	I0128 04:07:56.762107       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0128 04:07:56.762273       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0128 04:07:56.762459       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 04:07:56.762469       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0128 04:07:56.763008       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [a57407ff9027] <==
	* I0128 04:08:06.144840       1 serving.go:348] Generated self-signed cert in-memory
	W0128 04:08:08.802589       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0128 04:08:08.802814       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0128 04:08:08.802843       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0128 04:08:08.802940       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0128 04:08:08.832530       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0128 04:08:08.832549       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 04:08:08.840196       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0128 04:08:08.840361       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 04:08:08.841922       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0128 04:08:08.842094       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 04:08:08.942313       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-01-28 04:05:00 UTC, ends at Sat 2023-01-28 04:08:27 UTC. --
	Jan 28 04:08:08 pause-539738 kubelet[7772]: I0128 04:08:08.880077    7772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f0971d3d-f13f-421d-a7db-fa18ee862abb-kube-proxy\") pod \"kube-proxy-2vxmw\" (UID: \"f0971d3d-f13f-421d-a7db-fa18ee862abb\") " pod="kube-system/kube-proxy-2vxmw"
	Jan 28 04:08:08 pause-539738 kubelet[7772]: I0128 04:08:08.880098    7772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0971d3d-f13f-421d-a7db-fa18ee862abb-lib-modules\") pod \"kube-proxy-2vxmw\" (UID: \"f0971d3d-f13f-421d-a7db-fa18ee862abb\") " pod="kube-system/kube-proxy-2vxmw"
	Jan 28 04:08:08 pause-539738 kubelet[7772]: I0128 04:08:08.880120    7772 reconciler.go:41] "Reconciler: start to sync state"
	Jan 28 04:08:09 pause-539738 kubelet[7772]: I0128 04:08:09.261870    7772 kubelet_node_status.go:108] "Node was previously registered" node="pause-539738"
	Jan 28 04:08:09 pause-539738 kubelet[7772]: I0128 04:08:09.261999    7772 kubelet_node_status.go:73] "Successfully registered node" node="pause-539738"
	Jan 28 04:08:09 pause-539738 kubelet[7772]: I0128 04:08:09.263850    7772 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 28 04:08:09 pause-539738 kubelet[7772]: I0128 04:08:09.265242    7772 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 28 04:08:09 pause-539738 kubelet[7772]: E0128 04:08:09.982337    7772 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:09 pause-539738 kubelet[7772]: E0128 04:08:09.982491    7772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f0971d3d-f13f-421d-a7db-fa18ee862abb-kube-proxy podName:f0971d3d-f13f-421d-a7db-fa18ee862abb nodeName:}" failed. No retries permitted until 2023-01-28 04:08:10.482458425 +0000 UTC m=+6.858585389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f0971d3d-f13f-421d-a7db-fa18ee862abb-kube-proxy") pod "kube-proxy-2vxmw" (UID: "f0971d3d-f13f-421d-a7db-fa18ee862abb") : failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:09 pause-539738 kubelet[7772]: E0128 04:08:09.982515    7772 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:09 pause-539738 kubelet[7772]: E0128 04:08:09.982539    7772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d5d58d3-36c6-44d2-bf2d-2297c435af12-config-volume podName:9d5d58d3-36c6-44d2-bf2d-2297c435af12 nodeName:}" failed. No retries permitted until 2023-01-28 04:08:10.482532221 +0000 UTC m=+6.858659182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9d5d58d3-36c6-44d2-bf2d-2297c435af12-config-volume") pod "coredns-787d4945fb-jvdr8" (UID: "9d5d58d3-36c6-44d2-bf2d-2297c435af12") : failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: E0128 04:08:10.260802    7772 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: E0128 04:08:10.261075    7772 projected.go:198] Error preparing data for projected volume kube-api-access-98mds for pod kube-system/coredns-787d4945fb-jvdr8: failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: E0128 04:08:10.261357    7772 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: E0128 04:08:10.261404    7772 projected.go:198] Error preparing data for projected volume kube-api-access-jqpw9 for pod kube-system/kube-proxy-2vxmw: failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: E0128 04:08:10.261575    7772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d5d58d3-36c6-44d2-bf2d-2297c435af12-kube-api-access-98mds podName:9d5d58d3-36c6-44d2-bf2d-2297c435af12 nodeName:}" failed. No retries permitted until 2023-01-28 04:08:10.761365028 +0000 UTC m=+7.137491974 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-98mds" (UniqueName: "kubernetes.io/projected/9d5d58d3-36c6-44d2-bf2d-2297c435af12-kube-api-access-98mds") pod "coredns-787d4945fb-jvdr8" (UID: "9d5d58d3-36c6-44d2-bf2d-2297c435af12") : failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: E0128 04:08:10.261714    7772 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0971d3d-f13f-421d-a7db-fa18ee862abb-kube-api-access-jqpw9 podName:f0971d3d-f13f-421d-a7db-fa18ee862abb nodeName:}" failed. No retries permitted until 2023-01-28 04:08:10.76169875 +0000 UTC m=+7.137825700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jqpw9" (UniqueName: "kubernetes.io/projected/f0971d3d-f13f-421d-a7db-fa18ee862abb-kube-api-access-jqpw9") pod "kube-proxy-2vxmw" (UID: "f0971d3d-f13f-421d-a7db-fa18ee862abb") : failed to sync configmap cache: timed out waiting for the condition
	Jan 28 04:08:10 pause-539738 kubelet[7772]: I0128 04:08:10.915515    7772 scope.go:115] "RemoveContainer" containerID="689f2394c859575dfc2364323aeed3082f6bf6a03c02a86bfebf5893ace7b193"
	Jan 28 04:08:11 pause-539738 kubelet[7772]: I0128 04:08:11.702653    7772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16c544267dffcb63c0d09b5e96b77c5b7d4df254822a006bcc4ebbcbeb321c0f"
	Jan 28 04:08:13 pause-539738 kubelet[7772]: I0128 04:08:13.739420    7772 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jan 28 04:08:15 pause-539738 kubelet[7772]: I0128 04:08:15.112792    7772 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Jan 28 04:08:22 pause-539738 kubelet[7772]: I0128 04:08:22.531025    7772 topology_manager.go:210] "Topology Admit Handler"
	Jan 28 04:08:22 pause-539738 kubelet[7772]: I0128 04:08:22.599180    7772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/28af396f-4ec7-455c-afe3-469c018c0197-tmp\") pod \"storage-provisioner\" (UID: \"28af396f-4ec7-455c-afe3-469c018c0197\") " pod="kube-system/storage-provisioner"
	Jan 28 04:08:22 pause-539738 kubelet[7772]: I0128 04:08:22.599270    7772 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbmcm\" (UniqueName: \"kubernetes.io/projected/28af396f-4ec7-455c-afe3-469c018c0197-kube-api-access-tbmcm\") pod \"storage-provisioner\" (UID: \"28af396f-4ec7-455c-afe3-469c018c0197\") " pod="kube-system/storage-provisioner"
	Jan 28 04:08:23 pause-539738 kubelet[7772]: I0128 04:08:23.885601    7772 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.885510354 pod.CreationTimestamp="2023-01-28 04:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-28 04:08:23.883976405 +0000 UTC m=+20.260103372" watchObservedRunningTime="2023-01-28 04:08:23.885510354 +0000 UTC m=+20.261637321"
	
	* 
	* ==> storage-provisioner [134943d77a41] <==
	* I0128 04:08:23.968265       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0128 04:08:23.999481       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0128 04:08:24.000206       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0128 04:08:24.017061       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0128 04:08:24.019421       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-539738_83055290-6640-4e7a-8a08-35a811fa0d82!
	I0128 04:08:24.020588       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"afe395e0-edf0-49ca-b725-64635464d2ad", APIVersion:"v1", ResourceVersion:"522", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-539738_83055290-6640-4e7a-8a08-35a811fa0d82 became leader
	I0128 04:08:24.121244       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-539738_83055290-6640-4e7a-8a08-35a811fa0d82!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-539738 -n pause-539738
helpers_test.go:261: (dbg) Run:  kubectl --context pause-539738 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (96.50s)

                                                
                                    

Test pass (269/300)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.74
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.26.1/json-events 3.58
11 TestDownloadOnly/v1.26.1/preload-exists 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.17
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
19 TestBinaryMirror 0.58
20 TestOffline 86.88
22 TestAddons/Setup 146.94
24 TestAddons/parallel/Registry 28.38
25 TestAddons/parallel/Ingress 23.85
26 TestAddons/parallel/MetricsServer 5.58
27 TestAddons/parallel/HelmTiller 27.25
29 TestAddons/parallel/CSI 57.42
30 TestAddons/parallel/Headlamp 26.2
31 TestAddons/parallel/CloudSpanner 5.46
34 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/StoppedEnableDisable 13.33
36 TestCertOptions 89.67
37 TestCertExpiration 361.43
38 TestDockerFlags 87.85
39 TestForceSystemdFlag 56.6
40 TestForceSystemdEnv 80.06
41 TestKVMDriverInstallOrUpdate 17.6
46 TestErrorSpam/start 0.4
47 TestErrorSpam/status 0.8
48 TestErrorSpam/pause 1.21
49 TestErrorSpam/unpause 1.32
50 TestErrorSpam/stop 12.56
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 115.54
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 41.83
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.1
61 TestFunctional/serial/CacheCmd/cache/add_remote 2.66
62 TestFunctional/serial/CacheCmd/cache/add_local 1.27
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
66 TestFunctional/serial/CacheCmd/cache/cache_reload 1.47
67 TestFunctional/serial/CacheCmd/cache/delete 0.13
68 TestFunctional/serial/MinikubeKubectlCmd 0.13
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
70 TestFunctional/serial/ExtraConfig 58.49
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.12
73 TestFunctional/serial/LogsFileCmd 1.08
75 TestFunctional/parallel/ConfigCmd 0.48
76 TestFunctional/parallel/DashboardCmd 29.46
77 TestFunctional/parallel/DryRun 0.35
78 TestFunctional/parallel/InternationalLanguage 0.18
79 TestFunctional/parallel/StatusCmd 1.09
82 TestFunctional/parallel/ServiceCmd 15.03
83 TestFunctional/parallel/ServiceCmdConnect 8.6
84 TestFunctional/parallel/AddonsCmd 0.18
85 TestFunctional/parallel/PersistentVolumeClaim 41.02
87 TestFunctional/parallel/SSHCmd 0.49
88 TestFunctional/parallel/CpCmd 1.05
89 TestFunctional/parallel/MySQL 31.17
90 TestFunctional/parallel/FileSync 0.28
91 TestFunctional/parallel/CertSync 1.58
95 TestFunctional/parallel/NodeLabels 0.16
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.24
99 TestFunctional/parallel/License 0.18
100 TestFunctional/parallel/DockerEnv/bash 1.02
101 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
102 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
103 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
105 TestFunctional/parallel/MountCmd/any-port 24.79
106 TestFunctional/parallel/ProfileCmd/profile_list 0.43
107 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
108 TestFunctional/parallel/Version/short 0.08
109 TestFunctional/parallel/Version/components 0.71
110 TestFunctional/parallel/MountCmd/specific-port 1.84
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
123 TestFunctional/parallel/ImageCommands/ImageBuild 2.43
124 TestFunctional/parallel/ImageCommands/Setup 0.75
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.18
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.44
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.15
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.28
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.63
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.68
132 TestFunctional/delete_addon-resizer_images 0.08
133 TestFunctional/delete_my-image_image 0.02
134 TestFunctional/delete_minikube_cached_images 0.02
135 TestGvisorAddon 299.31
138 TestImageBuild/serial/NormalBuild 2.23
139 TestImageBuild/serial/BuildWithBuildArg 1.49
140 TestImageBuild/serial/BuildWithDockerIgnore 0.48
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.36
144 TestIngressAddonLegacy/StartLegacyK8sCluster 80.2
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.2
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.42
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 35.21
151 TestJSONOutput/start/Command 68.8
152 TestJSONOutput/start/Audit 0
154 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/pause/Command 0.6
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/unpause/Command 0.56
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 8.11
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 0.26
179 TestMainNoArgs 0.07
180 TestMinikubeProfile 109.29
183 TestMountStart/serial/StartWithMountFirst 27.78
184 TestMountStart/serial/VerifyMountFirst 0.42
185 TestMountStart/serial/StartWithMountSecond 27.92
186 TestMountStart/serial/VerifyMountSecond 0.42
187 TestMountStart/serial/DeleteFirst 0.86
188 TestMountStart/serial/VerifyMountPostDelete 0.42
189 TestMountStart/serial/Stop 2.24
190 TestMountStart/serial/RestartStopped 22.96
191 TestMountStart/serial/VerifyMountPostStop 0.42
194 TestMultiNode/serial/FreshStart2Nodes 131.71
195 TestMultiNode/serial/DeployApp2Nodes 4.33
196 TestMultiNode/serial/PingHostFrom2Pods 0.94
197 TestMultiNode/serial/AddNode 53.65
198 TestMultiNode/serial/ProfileList 0.23
199 TestMultiNode/serial/CopyFile 7.9
200 TestMultiNode/serial/StopNode 3.32
201 TestMultiNode/serial/StartAfterStop 29.98
202 TestMultiNode/serial/RestartKeepsNodes 159.88
203 TestMultiNode/serial/DeleteNode 1.76
204 TestMultiNode/serial/StopMultiNode 25.55
205 TestMultiNode/serial/RestartMultiNode 102.29
206 TestMultiNode/serial/ValidateNameConflict 55.93
211 TestPreload 162.58
213 TestScheduledStopUnix 126.14
214 TestSkaffold 84.47
217 TestRunningBinaryUpgrade 172.76
219 TestKubernetesUpgrade 183.44
223 TestPause/serial/Start 150.73
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
243 TestNoKubernetes/serial/StartWithK8s 107.97
244 TestNoKubernetes/serial/StartWithStopK8s 30.99
245 TestNoKubernetes/serial/Start 27.62
247 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
248 TestNoKubernetes/serial/ProfileList 20.07
249 TestStoppedBinaryUpgrade/Setup 0.26
250 TestStoppedBinaryUpgrade/Upgrade 198.37
251 TestNoKubernetes/serial/Stop 2.2
252 TestNoKubernetes/serial/StartNoArgs 23.92
253 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
254 TestStoppedBinaryUpgrade/MinikubeLogs 1.03
255 TestNetworkPlugins/group/auto/Start 108.93
256 TestNetworkPlugins/group/kindnet/Start 106.49
257 TestNetworkPlugins/group/calico/Start 108.1
258 TestNetworkPlugins/group/auto/KubeletFlags 0.27
259 TestNetworkPlugins/group/auto/NetCatPod 13.41
260 TestNetworkPlugins/group/auto/DNS 0.17
261 TestNetworkPlugins/group/auto/Localhost 0.14
262 TestNetworkPlugins/group/auto/HairPin 0.16
263 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
264 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
265 TestNetworkPlugins/group/kindnet/NetCatPod 15.42
266 TestNetworkPlugins/group/custom-flannel/Start 89.31
267 TestNetworkPlugins/group/kindnet/DNS 0.18
268 TestNetworkPlugins/group/kindnet/Localhost 0.15
269 TestNetworkPlugins/group/kindnet/HairPin 0.14
270 TestNetworkPlugins/group/false/Start 85.41
271 TestNetworkPlugins/group/enable-default-cni/Start 109.64
272 TestNetworkPlugins/group/calico/ControllerPod 5.02
273 TestNetworkPlugins/group/calico/KubeletFlags 0.28
274 TestNetworkPlugins/group/calico/NetCatPod 14.54
275 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
276 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.37
277 TestNetworkPlugins/group/calico/DNS 0.22
278 TestNetworkPlugins/group/calico/Localhost 0.38
279 TestNetworkPlugins/group/calico/HairPin 0.16
280 TestNetworkPlugins/group/custom-flannel/DNS 0.24
281 TestNetworkPlugins/group/custom-flannel/Localhost 0.28
282 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
283 TestNetworkPlugins/group/false/KubeletFlags 0.26
284 TestNetworkPlugins/group/false/NetCatPod 13.44
285 TestNetworkPlugins/group/flannel/Start 83.79
286 TestNetworkPlugins/group/bridge/Start 104.98
287 TestNetworkPlugins/group/false/DNS 0.17
288 TestNetworkPlugins/group/false/Localhost 0.16
289 TestNetworkPlugins/group/false/HairPin 0.18
290 TestNetworkPlugins/group/kubenet/Start 113.12
291 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
292 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.47
293 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
294 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
295 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
296 TestNetworkPlugins/group/flannel/ControllerPod 5.02
297 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
298 TestNetworkPlugins/group/flannel/NetCatPod 12.39
300 TestStartStop/group/old-k8s-version/serial/FirstStart 149.52
301 TestNetworkPlugins/group/flannel/DNS 0.34
302 TestNetworkPlugins/group/flannel/Localhost 0.29
303 TestNetworkPlugins/group/flannel/HairPin 0.32
304 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
305 TestNetworkPlugins/group/bridge/NetCatPod 12.45
307 TestStartStop/group/no-preload/serial/FirstStart 126.71
308 TestNetworkPlugins/group/bridge/DNS 0.19
309 TestNetworkPlugins/group/bridge/Localhost 0.16
310 TestNetworkPlugins/group/bridge/HairPin 0.19
311 TestNetworkPlugins/group/kubenet/KubeletFlags 0.29
312 TestNetworkPlugins/group/kubenet/NetCatPod 12.46
314 TestStartStop/group/embed-certs/serial/FirstStart 86.67
315 TestNetworkPlugins/group/kubenet/DNS 0.19
316 TestNetworkPlugins/group/kubenet/Localhost 0.15
317 TestNetworkPlugins/group/kubenet/HairPin 0.18
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.64
320 TestStartStop/group/embed-certs/serial/DeployApp 8.5
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.83
322 TestStartStop/group/embed-certs/serial/Stop 13.15
323 TestStartStop/group/no-preload/serial/DeployApp 8.57
324 TestStartStop/group/old-k8s-version/serial/DeployApp 9.56
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
326 TestStartStop/group/embed-certs/serial/SecondStart 309.8
327 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
328 TestStartStop/group/no-preload/serial/Stop 13.14
329 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.88
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.49
331 TestStartStop/group/old-k8s-version/serial/Stop 13.17
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.13
334 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
335 TestStartStop/group/no-preload/serial/SecondStart 308.56
336 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
337 TestStartStop/group/old-k8s-version/serial/SecondStart 98.96
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 357.19
340 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
341 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
342 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
343 TestStartStop/group/old-k8s-version/serial/Pause 2.49
345 TestStartStop/group/newest-cni/serial/FirstStart 74.56
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.93
348 TestStartStop/group/newest-cni/serial/Stop 8.13
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
350 TestStartStop/group/newest-cni/serial/SecondStart 46.79
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
354 TestStartStop/group/newest-cni/serial/Pause 2.43
355 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
356 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
357 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
358 TestStartStop/group/embed-certs/serial/Pause 2.56
359 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
360 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
361 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
362 TestStartStop/group/no-preload/serial/Pause 2.44
363 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
364 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
365 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
366 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.36
TestDownloadOnly/v1.16.0/json-events (6.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-440974 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-440974 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (6.744099542s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.74s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-440974
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-440974: exit status 85 (80.969668ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-440974 | jenkins | v1.28.0 | 28 Jan 23 03:30 UTC |          |
	|         | -p download-only-440974        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 03:30:48
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 03:30:48.653572   11074 out.go:296] Setting OutFile to fd 1 ...
	I0128 03:30:48.653669   11074 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 03:30:48.653679   11074 out.go:309] Setting ErrFile to fd 2...
	I0128 03:30:48.653688   11074 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 03:30:48.653778   11074 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3903/.minikube/bin
	W0128 03:30:48.653876   11074 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15565-3903/.minikube/config/config.json: open /home/jenkins/minikube-integration/15565-3903/.minikube/config/config.json: no such file or directory
	I0128 03:30:48.654340   11074 out.go:303] Setting JSON to true
	I0128 03:30:48.655048   11074 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":800,"bootTime":1674875849,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0128 03:30:48.655110   11074 start.go:135] virtualization: kvm guest
	I0128 03:30:48.657363   11074 out.go:97] [download-only-440974] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0128 03:30:48.657442   11074 notify.go:220] Checking for updates...
	I0128 03:30:48.658817   11074 out.go:169] MINIKUBE_LOCATION=15565
	W0128 03:30:48.657448   11074 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15565-3903/.minikube/cache/preloaded-tarball: no such file or directory
	I0128 03:30:48.661440   11074 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 03:30:48.662823   11074 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	I0128 03:30:48.664248   11074 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	I0128 03:30:48.665599   11074 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0128 03:30:48.668195   11074 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0128 03:30:48.668358   11074 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 03:30:48.772007   11074 out.go:97] Using the kvm2 driver based on user configuration
	I0128 03:30:48.772034   11074 start.go:296] selected driver: kvm2
	I0128 03:30:48.772047   11074 start.go:840] validating driver "kvm2" against <nil>
	I0128 03:30:48.772341   11074 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 03:30:48.772481   11074 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15565-3903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0128 03:30:48.786333   11074 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.28.0
	I0128 03:30:48.786375   11074 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 03:30:48.786956   11074 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0128 03:30:48.787084   11074 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0128 03:30:48.787117   11074 cni.go:84] Creating CNI manager for ""
	I0128 03:30:48.787131   11074 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 03:30:48.787139   11074 start_flags.go:319] config:
	{Name:download-only-440974 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-440974 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0128 03:30:48.787310   11074 iso.go:125] acquiring lock: {Name:mkae097b889f6bf43a43f260cc80a114303c04bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 03:30:48.789056   11074 out.go:97] Downloading VM boot image ...
	I0128 03:30:48.789088   11074 download.go:101] Downloading: https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso.sha256 -> /home/jenkins/minikube-integration/15565-3903/.minikube/cache/iso/amd64/minikube-v1.29.0-1674856271-15565-amd64.iso
	I0128 03:30:51.697024   11074 out.go:97] Starting control plane node download-only-440974 in cluster download-only-440974
	I0128 03:30:51.697046   11074 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 03:30:51.729270   11074 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 03:30:51.729300   11074 cache.go:57] Caching tarball of preloaded images
	I0128 03:30:51.729428   11074 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 03:30:51.731119   11074 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0128 03:30:51.731135   11074 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0128 03:30:51.753465   11074 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/15565-3903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-440974"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.26.1/json-events (3.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-440974 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-440974 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=kvm2 : (3.578636542s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (3.58s)

                                                
                                    
TestDownloadOnly/v1.26.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-440974
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-440974: exit status 85 (82.017106ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-440974 | jenkins | v1.28.0 | 28 Jan 23 03:30 UTC |          |
	|         | -p download-only-440974        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-440974 | jenkins | v1.28.0 | 28 Jan 23 03:30 UTC |          |
	|         | -p download-only-440974        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 03:30:55
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.19.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 03:30:55.478219   11110 out.go:296] Setting OutFile to fd 1 ...
	I0128 03:30:55.478546   11110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 03:30:55.478557   11110 out.go:309] Setting ErrFile to fd 2...
	I0128 03:30:55.478563   11110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 03:30:55.478715   11110 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3903/.minikube/bin
	W0128 03:30:55.478867   11110 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15565-3903/.minikube/config/config.json: open /home/jenkins/minikube-integration/15565-3903/.minikube/config/config.json: no such file or directory
	I0128 03:30:55.479262   11110 out.go:303] Setting JSON to true
	I0128 03:30:55.479995   11110 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":807,"bootTime":1674875849,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0128 03:30:55.480047   11110 start.go:135] virtualization: kvm guest
	I0128 03:30:55.482202   11110 out.go:97] [download-only-440974] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0128 03:30:55.483727   11110 out.go:169] MINIKUBE_LOCATION=15565
	I0128 03:30:55.482325   11110 notify.go:220] Checking for updates...
	I0128 03:30:55.486660   11110 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 03:30:55.488123   11110 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	I0128 03:30:55.489583   11110 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	I0128 03:30:55.490984   11110 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-440974"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-440974
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-134357 --alsologtostderr --binary-mirror http://127.0.0.1:35047 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-134357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-134357
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
TestOffline (86.88s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-466600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-466600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m25.31462611s)
helpers_test.go:175: Cleaning up "offline-docker-466600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-466600
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-466600: (1.563567059s)
--- PASS: TestOffline (86.88s)

                                                
                                    
TestAddons/Setup (146.94s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-722117 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-722117 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.940570134s)
--- PASS: TestAddons/Setup (146.94s)

                                                
                                    
TestAddons/parallel/Registry (28.38s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 17.177323ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:344: "registry-xh7ln" [6da133cb-2e3a-40f5-b6dd-bfeddc0915b4] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019588419s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-g9cwd" [b8ffdbcc-436a-4984-97a9-8d73cd38edaf] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010040963s
addons_test.go:305: (dbg) Run:  kubectl --context addons-722117 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-722117 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:310: (dbg) Done: kubectl --context addons-722117 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (17.685077508s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 ip
2023/01/28 03:33:55 [DEBUG] GET http://192.168.39.125:5000
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (28.38s)

                                                
                                    
TestAddons/parallel/Ingress (23.85s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-722117 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:197: (dbg) Run:  kubectl --context addons-722117 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-722117 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7092ad42-e0b8-41b0-a655-dbae82e151dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:344: "nginx" [7092ad42-e0b8-41b0-a655-dbae82e151dd] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.008662334s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-722117 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.39.125
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 addons disable ingress-dns --alsologtostderr -v=1: (2.090222362s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable ingress --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 addons disable ingress --alsologtostderr -v=1: (7.607446294s)
--- PASS: TestAddons/parallel/Ingress (23.85s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 17.215033ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:344: "metrics-server-5f8fcc9bb7-j8z2x" [9eed3c2f-3abd-4463-8776-e0339c35bd5d] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01790413s
addons_test.go:380: (dbg) Run:  kubectl --context addons-722117 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.58s)

                                                
                                    
TestAddons/parallel/HelmTiller (27.25s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 3.850301ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-q5596" [510f0c45-2a7b-4ff9-9e95-9b65e210d255] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00897599s
addons_test.go:438: (dbg) Run:  kubectl --context addons-722117 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:438: (dbg) Done: kubectl --context addons-722117 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (21.588960121s)
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (27.25s)

                                                
                                    
TestAddons/parallel/CSI (57.42s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 30.100836ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-722117 create -f testdata/csi-hostpath-driver/pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc -o jsonpath={.status.phase} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-722117 create -f testdata/csi-hostpath-driver/pv-pod.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [af911ea5-dff6-4e1f-a616-b625704ea1e4] Pending
helpers_test.go:344: "task-pv-pod" [af911ea5-dff6-4e1f-a616-b625704ea1e4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod" [af911ea5-dff6-4e1f-a616-b625704ea1e4] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 30.018366379s
addons_test.go:549: (dbg) Run:  kubectl --context addons-722117 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-722117 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:419: (dbg) Run:  kubectl --context addons-722117 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-722117 delete pod task-pv-pod

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:559: (dbg) Done: kubectl --context addons-722117 delete pod task-pv-pod: (1.124902411s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-722117 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-722117 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-722117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-722117 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8c02b1f3-24c7-4ccd-badb-6fbbf7b97dee] Pending
helpers_test.go:344: "task-pv-pod-restore" [8c02b1f3-24c7-4ccd-badb-6fbbf7b97dee] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod-restore" [8c02b1f3-24c7-4ccd-badb-6fbbf7b97dee] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.010730072s
addons_test.go:591: (dbg) Run:  kubectl --context addons-722117 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-722117 delete pod task-pv-pod-restore: (1.303551428s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-722117 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-722117 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable csi-hostpath-driver --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-722117 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.851300505s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-722117 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.42s)
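The PVC waits in this test (helpers_test.go:394) are plain polls of the claim's .status.phase through kubectl's jsonpath output until it reports Bound. Below is a minimal sketch of such a poll in Go, assuming the addons-722117 context and the hpvc claim created from testdata/csi-hostpath-driver/pvc.yaml; timings are taken from the six-minute budget in the log.

// pvc_wait.go - hedged sketch of the PVC phase poll used during the CSI test.
// It queries .status.phase every two seconds until the claim is Bound or the
// deadline expires.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-722117",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if phase := strings.TrimSpace(string(out)); err == nil && phase == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pvc hpvc to become Bound")
}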

                                                
                                    
x
+
TestAddons/parallel/Headlamp (26.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-722117 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-722117 --alsologtostderr -v=1: (1.186557437s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-xdgwv" [20df58b8-279e-4896-b0ac-90c52c8f0e30] Pending

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:344: "headlamp-5759877c79-xdgwv" [20df58b8-279e-4896-b0ac-90c52c8f0e30] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:344: "headlamp-5759877c79-xdgwv" [20df58b8-279e-4896-b0ac-90c52c8f0e30] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 25.010603248s
--- PASS: TestAddons/parallel/Headlamp (26.20s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.46s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5dcf58dbbb-cbfkt" [906ff15d-894c-440b-983f-37808d4e6dc2] Running

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006694211s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-722117
--- PASS: TestAddons/parallel/CloudSpanner (5.46s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-722117 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-722117 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (13.33s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-722117
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-722117: (13.117206182s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-722117
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-722117
--- PASS: TestAddons/StoppedEnableDisable (13.33s)

                                                
                                    
x
+
TestCertOptions (89.67s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-458715 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-458715 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m28.090518195s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-458715 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-458715 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-458715 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-458715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-458715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-458715: (1.074179039s)
--- PASS: TestCertOptions (89.67s)
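What this test is really asserting is that the extra --apiserver-ips/--apiserver-names (and the non-default --apiserver-port) end up in the generated apiserver certificate and kubeconfig. Below is a rough sketch of the certificate half of that check, reusing the ssh command from the log; the profile name and binary path are the ones above, and the expected strings are the values passed at start time.

// cert_sans_check.go - hedged sketch of the SAN check behind TestCertOptions.
// It dumps apiserver.crt over `minikube ssh` and looks for the extra IPs and
// hostnames that were requested on the command line.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-458715",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		log.Fatalf("reading apiserver.crt: %v", err)
	}
	text := string(out)
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(text, want) {
			log.Fatalf("expected %q among the certificate SANs", want)
		}
	}
	log.Println("all requested SANs present")
}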

                                                
                                    
x
+
TestCertExpiration (361.43s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-013418 --memory=2048 --cert-expiration=3m --driver=kvm2 
E0128 04:09:15.103424   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:09:20.224176   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:09:30.464931   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-013418 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m57.393141029s)
E0128 04:11:30.137239   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-013418 --memory=2048 --cert-expiration=8760h --driver=kvm2 

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-013418 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (1m2.907370185s)
helpers_test.go:175: Cleaning up "cert-expiration-013418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-013418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-013418: (1.127540249s)
--- PASS: TestCertExpiration (361.43s)

                                                
                                    
x
+
TestDockerFlags (87.85s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-461966 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-461966 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m26.331538344s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-461966 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-461966 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-461966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-461966
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-461966: (1.03926916s)
--- PASS: TestDockerFlags (87.85s)
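The two ssh probes above check that --docker-env and --docker-opt were propagated into the Docker systemd unit inside the VM (Environment for the env vars, ExecStart for the extra daemon options). Below is a small sketch of the Environment half, with the profile name and one injected variable taken from the log.

// docker_flags_check.go - hedged sketch of the systemd property check used by
// TestDockerFlags: injected --docker-env values should show up in the docker
// unit's Environment property.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "docker-flags-461966",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").Output()
	if err != nil {
		log.Fatalf("systemctl show failed: %v", err)
	}
	if !strings.Contains(string(out), "FOO=BAR") {
		log.Fatalf("expected FOO=BAR in the docker unit Environment, got: %s", out)
	}
	log.Println("docker-env flags reached the docker unit")
}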

                                                
                                    
x
+
TestForceSystemdFlag (56.6s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-746602 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
E0128 04:08:19.675782   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-746602 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (55.250167752s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-746602 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-746602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-746602
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-746602: (1.067251135s)
--- PASS: TestForceSystemdFlag (56.60s)
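This test and TestForceSystemdEnv below share one assertion: with systemd forced (via the --force-systemd flag here, via the environment in the next test), `docker info` inside the node must report systemd as the cgroup driver. Below is a minimal sketch of that check, using the flag profile name from this log.

// cgroup_driver_check.go - hedged sketch of the cgroup-driver assertion shared
// by the force-systemd tests.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-746602",
		"ssh", "docker info --format {{.CgroupDriver}}").Output()
	if err != nil {
		log.Fatalf("docker info failed: %v", err)
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		log.Fatalf("expected cgroup driver systemd, got %q", driver)
	}
	log.Println("cgroup driver is systemd")
}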

                                                
                                    
x
+
TestForceSystemdEnv (80.06s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-905389 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-905389 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m18.490017292s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-905389 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-905389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-905389
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-905389: (1.276073402s)
--- PASS: TestForceSystemdEnv (80.06s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (17.6s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (17.60s)

                                                
                                    
x
+
TestErrorSpam/start (0.4s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

                                                
                                    
x
+
TestErrorSpam/status (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
x
+
TestErrorSpam/pause (1.21s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 pause
--- PASS: TestErrorSpam/pause (1.21s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.32s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 unpause
--- PASS: TestErrorSpam/unpause (1.32s)

                                                
                                    
x
+
TestErrorSpam/stop (12.56s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 stop: (12.37647856s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-430971 --log_dir /tmp/nospam-430971 stop
--- PASS: TestErrorSpam/stop (12.56s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15565-3903/.minikube/files/etc/test/nested/copy/11062/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (115.54s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-868781 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-868781 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m55.540470168s)
--- PASS: TestFunctional/serial/StartWithProxy (115.54s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (41.83s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-868781 --alsologtostderr -v=8
E0128 03:38:27.092368   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 03:38:27.098115   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 03:38:27.108369   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 03:38:27.128599   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 03:38:27.168806   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 03:38:27.249102   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 03:38:27.409382   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 03:38:27.729915   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 03:38:28.370805   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 03:38:29.651424   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 03:38:32.212370   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 03:38:37.333472   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-868781 --alsologtostderr -v=8: (41.827148278s)
functional_test.go:656: soft start took 41.827861589s for "functional-868781" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.83s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-868781 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.66s)
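cache add pulls the named images to the host and preloads them into the node so later starts and pod pulls do not need the network. Below is a tiny sketch of the same preload in Go, using the pause tags and binary path from this log.

// cache_add_sketch.go - hedged sketch of the remote cache preload above: add
// three pause image tags to the profile's cache, then list the cache contents.
package main

import (
	"log"
	"os/exec"
)

func main() {
	images := []string{"k8s.gcr.io/pause:3.1", "k8s.gcr.io/pause:3.3", "k8s.gcr.io/pause:latest"}
	for _, img := range images {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-868781",
			"cache", "add", img).CombinedOutput()
		if err != nil {
			log.Fatalf("cache add %s failed: %v\n%s", img, err, out)
		}
	}
	out, err := exec.Command("out/minikube-linux-amd64", "cache", "list").CombinedOutput()
	if err != nil {
		log.Fatalf("cache list failed: %v", err)
	}
	log.Printf("cached images:\n%s", out)
}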

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-868781 /tmp/TestFunctionalserialCacheCmdcacheadd_local1102828685/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 cache add minikube-local-cache-test:functional-868781
functional_test.go:1082: (dbg) Done: out/minikube-linux-amd64 -p functional-868781 cache add minikube-local-cache-test:functional-868781: (1.034631092s)
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 cache delete minikube-local-cache-test:functional-868781
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-868781
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-868781 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (226.359019ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 cache reload
E0128 03:38:47.574062   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)
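The cache_reload sequence above is: remove the cached image from the node, confirm `crictl inspecti` now fails, run `minikube cache reload`, and confirm the image is back. Below is a condensed sketch of that round trip, with the profile name and image tag taken from the log.

// cache_reload_sketch.go - hedged sketch of the cache reload round trip above.
package main

import (
	"log"
	"os/exec"
)

// run executes the minikube binary from this report with the given arguments,
// logs its combined output, and returns the command's error.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	log.Printf("%v\n%s", args, out)
	return err
}

func main() {
	p := []string{"-p", "functional-868781"}
	_ = run(append(p, "ssh", "sudo docker rmi k8s.gcr.io/pause:latest")...)
	if run(append(p, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest")...) == nil {
		log.Fatal("image should be missing before the reload")
	}
	if err := run(append(p, "cache", "reload")...); err != nil {
		log.Fatalf("cache reload failed: %v", err)
	}
	if err := run(append(p, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest")...); err != nil {
		log.Fatalf("image still missing after reload: %v", err)
	}
}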

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 kubectl -- --context functional-868781 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-868781 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (58.49s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-868781 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0128 03:39:08.054333   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-868781 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.485479783s)
functional_test.go:754: restart took 58.485612629s for "functional-868781" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (58.49s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-868781 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
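ComponentHealth above lists the control-plane pods with a label selector and checks each one's phase and readiness. Below is a stripped-down sketch of that scan, decoding only the fields it needs; the context, namespace, and selector are the ones from the log, while the struct shape is an assumption for illustration.

// component_health_sketch.go - hedged sketch of the control-plane readiness
// scan: list pods labelled tier=control-plane and require phase Running plus
// a Ready=True condition on each.
package main

import (
	"encoding/json"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-868781",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		log.Fatalf("listing control-plane pods: %v", err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatalf("decoding pod list: %v", err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		log.Printf("%s phase=%s ready=%v", p.Metadata.Name, p.Status.Phase, ready)
		if p.Status.Phase != "Running" || !ready {
			log.Fatalf("control-plane pod %s is not healthy", p.Metadata.Name)
		}
	}
}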

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-868781 logs: (1.118059066s)
--- PASS: TestFunctional/serial/LogsCmd (1.12s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 logs --file /tmp/TestFunctionalserialLogsFileCmd2792980135/001/logs.txt
E0128 03:39:49.014701   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-868781 logs --file /tmp/TestFunctionalserialLogsFileCmd2792980135/001/logs.txt: (1.082021473s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-868781 config get cpus: exit status 14 (79.738144ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 config set cpus 2

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-868781 config get cpus: exit status 14 (72.317059ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
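These checks exercise the config set / config get / config unset round trip; note that `config get` for an unset key exits non-zero (status 14 in this log) rather than printing an empty value. Below is a minimal sketch of the same round trip, with the profile name and binary path from the log.

// config_cmd_sketch.go - hedged sketch of the config round trip above: unset
// cpus, expect `config get cpus` to fail, set it to 2, expect it to succeed,
// then unset it again.
package main

import (
	"log"
	"os/exec"
)

func config(args ...string) error {
	full := append([]string{"-p", "functional-868781", "config"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	log.Printf("config %v: %s", args, out)
	return err
}

func main() {
	_ = config("unset", "cpus")
	if config("get", "cpus") == nil {
		log.Fatal("expected `config get cpus` to fail while the key is unset")
	}
	if err := config("set", "cpus", "2"); err != nil {
		log.Fatalf("config set failed: %v", err)
	}
	if err := config("get", "cpus"); err != nil {
		log.Fatalf("config get failed after set: %v", err)
	}
	_ = config("unset", "cpus")
}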

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (29.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-868781 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-868781 --alsologtostderr -v=1] ...

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:508: unable to kill pid 15289: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (29.46s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-868781 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-868781 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (174.22313ms)

                                                
                                                
-- stdout --
	* [functional-868781] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0128 03:39:53.063783   15108 out.go:296] Setting OutFile to fd 1 ...
	I0128 03:39:53.063881   15108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 03:39:53.063889   15108 out.go:309] Setting ErrFile to fd 2...
	I0128 03:39:53.063893   15108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 03:39:53.063995   15108 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3903/.minikube/bin
	I0128 03:39:53.064450   15108 out.go:303] Setting JSON to false
	I0128 03:39:53.065406   15108 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1344,"bootTime":1674875849,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0128 03:39:53.065462   15108 start.go:135] virtualization: kvm guest
	I0128 03:39:53.068666   15108 out.go:177] * [functional-868781] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0128 03:39:53.070602   15108 notify.go:220] Checking for updates...
	I0128 03:39:53.071928   15108 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 03:39:53.073471   15108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 03:39:53.079614   15108 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	I0128 03:39:53.081334   15108 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	I0128 03:39:53.082786   15108 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0128 03:39:53.084354   15108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 03:39:53.086357   15108 config.go:180] Loaded profile config "functional-868781": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 03:39:53.086878   15108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:39:53.086937   15108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:39:53.102421   15108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41805
	I0128 03:39:53.102812   15108 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:39:53.103296   15108 main.go:141] libmachine: Using API Version  1
	I0128 03:39:53.103319   15108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:39:53.103664   15108 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:39:53.103836   15108 main.go:141] libmachine: (functional-868781) Calling .DriverName
	I0128 03:39:53.104015   15108 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 03:39:53.104379   15108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:39:53.104414   15108 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:39:53.119138   15108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I0128 03:39:53.119454   15108 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:39:53.119905   15108 main.go:141] libmachine: Using API Version  1
	I0128 03:39:53.119935   15108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:39:53.120230   15108 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:39:53.120421   15108 main.go:141] libmachine: (functional-868781) Calling .DriverName
	I0128 03:39:53.150247   15108 out.go:177] * Using the kvm2 driver based on existing profile
	I0128 03:39:53.151749   15108 start.go:296] selected driver: kvm2
	I0128 03:39:53.151773   15108 start.go:840] validating driver "kvm2" against &{Name:functional-868781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.26.1 ClusterName:functional-868781 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.17 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:f
alse nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0128 03:39:53.151922   15108 start.go:851] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 03:39:53.154288   15108 out.go:177] 
	W0128 03:39:53.155806   15108 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0128 03:39:53.157142   15108 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-868781 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.35s)
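The point of the failing dry run above is that resource validation still happens with --dry-run: a 250MB request is below minikube's usable minimum of 1800MB, so the command exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any VM work is attempted. Below is a small sketch of that expectation; the binary path, profile, and flags are the ones from the log.

// dry_run_sketch.go - hedged sketch of the dry-run memory validation above: an
// under-provisioned request should be rejected with a non-zero exit (status 23
// in this log) without touching the running cluster.
package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-868781",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=kvm2")
	out, err := cmd.CombinedOutput()
	log.Printf("%s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		log.Printf("dry run rejected the request with exit code %d", exitErr.ExitCode())
		return
	}
	log.Fatal("expected the under-provisioned dry run to fail")
}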

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-868781 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-868781 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (175.938748ms)

                                                
                                                
-- stdout --
	* [functional-868781] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0128 03:39:53.410845   15211 out.go:296] Setting OutFile to fd 1 ...
	I0128 03:39:53.411012   15211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 03:39:53.411023   15211 out.go:309] Setting ErrFile to fd 2...
	I0128 03:39:53.411029   15211 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 03:39:53.411204   15211 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3903/.minikube/bin
	I0128 03:39:53.411791   15211 out.go:303] Setting JSON to false
	I0128 03:39:53.412676   15211 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1345,"bootTime":1674875849,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0128 03:39:53.412735   15211 start.go:135] virtualization: kvm guest
	I0128 03:39:53.415253   15211 out.go:177] * [functional-868781] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	I0128 03:39:53.416766   15211 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 03:39:53.416720   15211 notify.go:220] Checking for updates...
	I0128 03:39:53.419455   15211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 03:39:53.421177   15211 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	I0128 03:39:53.422714   15211 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	I0128 03:39:53.424274   15211 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0128 03:39:53.425903   15211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 03:39:53.427903   15211 config.go:180] Loaded profile config "functional-868781": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 03:39:53.428422   15211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:39:53.428506   15211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:39:53.444120   15211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0128 03:39:53.444471   15211 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:39:53.444983   15211 main.go:141] libmachine: Using API Version  1
	I0128 03:39:53.445016   15211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:39:53.445321   15211 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:39:53.445513   15211 main.go:141] libmachine: (functional-868781) Calling .DriverName
	I0128 03:39:53.445690   15211 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 03:39:53.445957   15211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:39:53.445998   15211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:39:53.462064   15211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39533
	I0128 03:39:53.462410   15211 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:39:53.462828   15211 main.go:141] libmachine: Using API Version  1
	I0128 03:39:53.462846   15211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:39:53.463167   15211 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:39:53.463370   15211 main.go:141] libmachine: (functional-868781) Calling .DriverName
	I0128 03:39:53.499895   15211 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0128 03:39:53.501317   15211 start.go:296] selected driver: kvm2
	I0128 03:39:53.501342   15211 start.go:840] validating driver "kvm2" against &{Name:functional-868781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.26.1 ClusterName:functional-868781 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.17 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:f
alse nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0128 03:39:53.501498   15211 start.go:851] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 03:39:53.504058   15211 out.go:177] 
	W0128 03:39:53.505596   15211 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0128 03:39:53.506931   15211 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
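Note: the localized stderr above is French. In English, the key lines read "Using the kvm2 driver based on the existing profile" and "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB". A minimal sketch of reproducing localized output by hand, assuming minikube selects its translation from the LC_ALL/LANG environment and that --dry-run with a deliberately low --memory value triggers the same message (both assumptions, not taken from this report):
LC_ALL=fr out/minikube-linux-amd64 start -p functional-868781 --dry-run --memory=250mb --driver=kvm2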

                                                
                                    
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
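Note: the -f/--format flag accepts a Go template over the status fields exercised above (.Host, .Kubelet, .APIServer, .Kubeconfig). A minimal sketch; the sample output is typical rather than taken from this run:
out/minikube-linux-amd64 -p functional-868781 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
# typically prints something like: host:Running,kubelet:Running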

                                                
                                    
TestFunctional/parallel/ServiceCmd (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-868781 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1439: (dbg) Run:  kubectl --context functional-868781 expose deployment hello-node --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:344: "hello-node-6fddd6858d-hwgkw" [73a29560-c737-467b-ab2a-b4ce82060493] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:344: "hello-node-6fddd6858d-hwgkw" [73a29560-c737-467b-ab2a-b4ce82060493] Running
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 13.029563027s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 service list
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 service --namespace=default --https --url hello-node
functional_test.go:1476: found endpoint: https://192.168.39.17:30236
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 service hello-node --url --format={{.IP}}
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 service hello-node --url
functional_test.go:1511: found endpoint for hello-node: http://192.168.39.17:30236
--- PASS: TestFunctional/parallel/ServiceCmd (15.03s)
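Note: condensed, the sequence exercised above is: create a Deployment, expose it as a NodePort Service, then resolve its URL through minikube. The port in the URL is assigned from Kubernetes' default NodePort range (30000-32767), which is why 30236 appears above:
kubectl --context functional-868781 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
kubectl --context functional-868781 expose deployment hello-node --type=NodePort --port=8080
out/minikube-linux-amd64 -p functional-868781 service hello-node --url
# prints e.g. http://192.168.39.17:30236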

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-868781 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-868781 expose deployment hello-node-connect --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-bpnfd" [c2abce92-d674-4ddc-ade8-7fe2716c10f1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-bpnfd" [c2abce92-d674-4ddc-ade8-7fe2716c10f1] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008625487s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 service hello-node-connect --url
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.39.17:31171
functional_test.go:1605: http://192.168.39.17:31171: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-5cf7cc858f-bpnfd

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.17:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.17:31171
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (41.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [985523bf-f7cd-4a5b-a25b-2ed318fb1105] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007369122s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-868781 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-868781 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-868781 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-868781 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-868781 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9636f624-464e-443a-9ce8-74feda2e0acb] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [9636f624-464e-443a-9ce8-74feda2e0acb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [9636f624-464e-443a-9ce8-74feda2e0acb] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.01066624s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-868781 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-868781 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-868781 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fbdabc3c-6aaa-4293-aa3c-2de7ddc9bf0a] Pending
helpers_test.go:344: "sp-pod" [fbdabc3c-6aaa-4293-aa3c-2de7ddc9bf0a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [fbdabc3c-6aaa-4293-aa3c-2de7ddc9bf0a] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.010523528s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-868781 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.02s)
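Note: the test above applies testdata/storage-provisioner/pvc.yaml and pod.yaml, writes /tmp/mount/foo from the first pod, then recreates the pod and verifies the file survived. A comparable minimal claim, shown only as an illustration (the requested size and exact manifest contents are assumptions, not taken from the minikube testdata):
cat <<'EOF' | kubectl --context functional-868781 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF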

                                                
                                    
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh -n functional-868781 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 cp functional-868781:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4292539271/001/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh -n functional-868781 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.05s)

                                                
                                    
TestFunctional/parallel/MySQL (31.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-868781 replace --force -f testdata/mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-prq2d" [843ad8c9-688b-4955-80e5-9f31ea293170] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-prq2d" [843ad8c9-688b-4955-80e5-9f31ea293170] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.010300647s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-868781 exec mysql-888f84dd9-prq2d -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-868781 exec mysql-888f84dd9-prq2d -- mysql -ppassword -e "show databases;": exit status 1 (278.712203ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-868781 exec mysql-888f84dd9-prq2d -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-868781 exec mysql-888f84dd9-prq2d -- mysql -ppassword -e "show databases;": exit status 1 (208.14725ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-868781 exec mysql-888f84dd9-prq2d -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.17s)
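Note: the two non-zero exits above (ERROR 1045 access denied, then ERROR 2002 socket not available) are typical while the MySQL container is still initializing; the test simply retries the same query until it succeeds. An equivalent manual retry loop, reusing the pod name from this run purely for illustration:
for i in $(seq 1 10); do
  kubectl --context functional-868781 exec mysql-888f84dd9-prq2d -- mysql -ppassword -e "show databases;" && break
  sleep 3
done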

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/11062/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "sudo cat /etc/test/nested/copy/11062/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/11062.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "sudo cat /etc/ssl/certs/11062.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/11062.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "sudo cat /usr/share/ca-certificates/11062.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /etc/ssl/certs/110622.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "sudo cat /etc/ssl/certs/110622.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/110622.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "sudo cat /usr/share/ca-certificates/110622.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)
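Note: the hash-named files checked above (/etc/ssl/certs/51391683.0 and 3ec20f2e.0) follow OpenSSL's subject-hash naming convention for CA certificates. Assuming openssl is available in the guest and the hashes here were derived the same way (both assumptions), the name for a given cert can be recomputed like this:
out/minikube-linux-amd64 -p functional-868781 ssh "openssl x509 -noout -hash -in /usr/share/ca-certificates/11062.pem"
# should print the hash used for the corresponding /etc/ssl/certs/<hash>.0 entry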

                                                
                                    
TestFunctional/parallel/NodeLabels (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-868781 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.16s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-868781 ssh "sudo systemctl is-active crio": exit status 1 (238.8573ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)
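Note: the stderr above is expected. systemctl is-active exits non-zero for a unit that is not active (3 is the conventional code for "inactive"), which surfaces through ssh as "Process exited with status 3"; the stdout "inactive" confirms the crio runtime is disabled, which is the pass condition here. To query the state without a failing exit code:
out/minikube-linux-amd64 -p functional-868781 ssh "sudo systemctl is-active crio || true"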

                                                
                                    
TestFunctional/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-868781 docker-env) && out/minikube-linux-amd64 status -p functional-868781"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-868781 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.02s)
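Note: docker-env prints the environment variables that point a local docker CLI at the cluster's Docker daemon; eval-ing them is the whole mechanism the test exercises:
eval $(out/minikube-linux-amd64 -p functional-868781 docker-env)
docker images   # now lists the images inside the functional-868781 VM rather than on the host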

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (24.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-868781 /tmp/TestFunctionalparallelMountCmdany-port1890985587/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1674877191931697340" to /tmp/TestFunctionalparallelMountCmdany-port1890985587/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1674877191931697340" to /tmp/TestFunctionalparallelMountCmdany-port1890985587/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1674877191931697340" to /tmp/TestFunctionalparallelMountCmdany-port1890985587/001/test-1674877191931697340
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-868781 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (278.173219ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh -- ls -la /mount-9p

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 28 03:39 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 28 03:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 28 03:39 test-1674877191931697340
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh cat /mount-9p/test-1674877191931697340

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-868781 replace --force -f testdata/busybox-mount-test.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [116467ce-6e8a-4f15-a06c-5ac7064c964f] Pending
helpers_test.go:344: "busybox-mount" [116467ce-6e8a-4f15-a06c-5ac7064c964f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [116467ce-6e8a-4f15-a06c-5ac7064c964f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [116467ce-6e8a-4f15-a06c-5ac7064c964f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 22.009502609s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-868781 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-868781 /tmp/TestFunctionalparallelMountCmdany-port1890985587/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (24.79s)
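Note: the any-port flow above boils down to running the 9p mount in the background, checking it from inside the guest, and force-unmounting when done. A condensed sketch; /tmp/host-dir is a placeholder path, not the temp directory used by the test:
out/minikube-linux-amd64 mount -p functional-868781 /tmp/host-dir:/mount-9p &
out/minikube-linux-amd64 -p functional-868781 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-868781 ssh "sudo umount -f /mount-9p"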

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1311: Took "358.257935ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "75.246472ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "262.571261ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1375: Took "73.945271ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-868781 /tmp/TestFunctionalparallelMountCmdspecific-port3914723300/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-868781 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (262.43226ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-868781 /tmp/TestFunctionalparallelMountCmdspecific-port3914723300/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh "sudo umount -f /mount-9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-868781 /tmp/TestFunctionalparallelMountCmdspecific-port3914723300/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image ls --format short

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-868781 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-868781
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-868781
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image ls --format table

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-868781 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-868781 | ac2ad0d875df8 | 30B    |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/mysql                     | 5.7               | 9ec14ca3fec4d | 455MB  |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-868781 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image ls --format json

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-868781 image ls --format json:
[{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-868781"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63
ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"9ec14ca3fec4d86d989ea6ac3f66af44da0298438e1082b0f1682dba5c912fdd","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],
"size":"744000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"ac2ad0d875df8edd94d95828c8b03d2f9db350e4b8a7cf9609926204cfa60b05","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-868781"],"size":"30"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"6e38f40d628db
3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image ls --format yaml

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-868781 image ls --format yaml:
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: ac2ad0d875df8edd94d95828c8b03d2f9db350e4b8a7cf9609926204cfa60b05
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-868781
size: "30"
- id: 9ec14ca3fec4d86d989ea6ac3f66af44da0298438e1082b0f1682dba5c912fdd
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-868781
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 ssh pgrep buildkitd

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-868781 ssh pgrep buildkitd: exit status 1 (239.880182ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image build -t localhost/my-image:functional-868781 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-868781 image build -t localhost/my-image:functional-868781 testdata/build: (1.949784173s)
functional_test.go:316: (dbg) Stdout: out/minikube-linux-amd64 -p functional-868781 image build -t localhost/my-image:functional-868781 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in a521707e6721
Removing intermediate container a521707e6721
---> 90945125e024
Step 3/3 : ADD content.txt /
---> db6d5acb3d3b
Successfully built db6d5acb3d3b
Successfully tagged localhost/my-image:functional-868781
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.43s)
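Editor's note: the three build steps in the output above imply a Dockerfile of the following shape. This is a sketch reconstructed from the log, not a copy of the repository's testdata/build directory; the content of content.txt is arbitrary:

	# reproduce the same three-step build against the cluster's builder
	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	echo test > content.txt
	out/minikube-linux-amd64 -p functional-868781 image build -t localhost/my-image:functional-868781 .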

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2023/01/28 03:40:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-868781
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image load --daemon gcr.io/google-containers/addon-resizer:functional-868781

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-868781 image load --daemon gcr.io/google-containers/addon-resizer:functional-868781: (3.947294663s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image load --daemon gcr.io/google-containers/addon-resizer:functional-868781

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-868781 image load --daemon gcr.io/google-containers/addon-resizer:functional-868781: (2.223747735s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-868781
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image load --daemon gcr.io/google-containers/addon-resizer:functional-868781
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-868781 image load --daemon gcr.io/google-containers/addon-resizer:functional-868781: (3.207922734s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image save gcr.io/google-containers/addon-resizer:functional-868781 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-868781 image save gcr.io/google-containers/addon-resizer:functional-868781 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar: (1.277347475s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.28s)
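Editor's note: a hedged follow-up for inspecting the archive written above. It assumes the file is a Docker image archive and that docker is available on the host running the tests; the later ImageLoadFromFile subtest loads the same file back through minikube:

	# list the archive contents, then load it into the local docker daemon (illustrative)
	tar -tf /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar | head
	docker load -i /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar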

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image rm gcr.io/google-containers/addon-resizer:functional-868781
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-868781 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar: (1.411420131s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-868781
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-868781 image save --daemon gcr.io/google-containers/addon-resizer:functional-868781

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-868781 image save --daemon gcr.io/google-containers/addon-resizer:functional-868781: (2.636267048s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-868781
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.68s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-868781
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-868781
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-868781
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestGvisorAddon (299.31s)

                                                
                                                
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

                                                
                                                

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-252601 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-252601 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m5.92000063s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-252601 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0128 04:09:49.758791   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 04:09:50.945687   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-252601 cache add gcr.io/k8s-minikube/gvisor-addon:2: (21.659877174s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-252601 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-252601 addons enable gvisor: (3.187621683s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [ae270b6a-232c-4e40-964c-7bc4f3f8374a] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.017195551s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-252601 replace --force -f testdata/nginx-untrusted.yaml
gvisor_addon_test.go:78: (dbg) Run:  kubectl --context gvisor-252601 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:83: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:344: "nginx-untrusted" [aa0b1e13-b074-4567-8726-f72fd3b8009d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestGvisorAddon
helpers_test.go:344: "nginx-untrusted" [aa0b1e13-b074-4567-8726-f72fd3b8009d] Running
gvisor_addon_test.go:83: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 13.008536137s
gvisor_addon_test.go:86: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [cbc9dd47-90b9-42df-a6c3-49ba7b4c9d6e] Running
gvisor_addon_test.go:86: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.008090856s
gvisor_addon_test.go:91: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-252601
E0128 04:10:31.907302   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:91: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-252601: (1m32.405793125s)
gvisor_addon_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-252601 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-252601 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m16.332503334s)
gvisor_addon_test.go:100: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [ae270b6a-232c-4e40-964c-7bc4f3f8374a] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
gvisor_addon_test.go:100: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.017436255s
gvisor_addon_test.go:103: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:344: "nginx-untrusted" [aa0b1e13-b074-4567-8726-f72fd3b8009d] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0128 04:13:19.675532   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
gvisor_addon_test.go:103: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 5.007438558s
gvisor_addon_test.go:106: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [cbc9dd47-90b9-42df-a6c3-49ba7b4c9d6e] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0128 04:13:27.091686   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
gvisor_addon_test.go:106: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.005023737s
helpers_test.go:175: Cleaning up "gvisor-252601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-252601
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-252601: (1.281215226s)
--- PASS: TestGvisorAddon (299.31s)
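Editor's note: the test applies two pod specs, testdata/nginx-untrusted.yaml and testdata/nginx-gvisor.yaml, whose contents are not shown in the log. A minimal sketch of what a gVisor-backed pod of this kind could look like, assuming the addon registers a RuntimeClass named gvisor; the manifest below is illustrative and matches the labels the test selects on (run=nginx, runtime=gvisor), not the repository's testdata:

	cat <<'EOF' | kubectl --context gvisor-252601 apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: nginx-gvisor
	  labels:
	    run: nginx
	    runtime: gvisor
	spec:
	  runtimeClassName: gvisor
	  containers:
	  - name: nginx
	    image: nginx
	EOF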

                                                
                                    
x
+
TestImageBuild/serial/NormalBuild (2.23s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-755396
image_test.go:73: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-755396: (2.226151137s)
--- PASS: TestImageBuild/serial/NormalBuild (2.23s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithBuildArg (1.49s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-755396
image_test.go:94: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-755396: (1.487711644s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.49s)
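Editor's note: the --build-opt flags above are forwarded to the underlying builder, so the run is roughly equivalent to invoking the builder directly with the corresponding options. A hedged sketch, assuming the Dockerfile in testdata/image-build/test-arg declares ARG ENV_A:

	# roughly equivalent direct docker build (illustrative, assumes docker points at the same daemon)
	docker build --build-arg ENV_A=test_env_str --no-cache -t aaa:latest ./testdata/image-build/test-arg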

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-755396
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.36s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-755396
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.36s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (80.2s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-161764 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-161764 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m20.204828315s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (80.20s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.2s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-161764 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-161764 addons enable ingress --alsologtostderr -v=5: (13.200027045s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.20s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.42s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-161764 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.42s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (35.21s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-161764 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0128 03:43:27.092573   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-161764 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.549239176s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-161764 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-161764 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1ebdaec6-9ea2-4e47-844c-3809495aeb4f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1ebdaec6-9ea2-4e47-844c-3809495aeb4f] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.011363768s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-161764 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-161764 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-161764 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.39.69
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-161764 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-161764 addons disable ingress-dns --alsologtostderr -v=1: (6.134242766s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-161764 addons disable ingress --alsologtostderr -v=1
E0128 03:43:54.775611   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-161764 addons disable ingress --alsologtostderr -v=1: (7.31279516s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (35.21s)
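Editor's note: the curl and nslookup steps above exercise an Ingress for host nginx.example.com and the ingress-dns resolver for hello-john.test. A minimal sketch of a networking.k8s.io/v1beta1 Ingress of that shape, assuming a Service named nginx listening on port 80; this is illustrative, not the repository's testdata/nginx-ingress-v1beta1.yaml:

	cat <<'EOF' | kubectl --context ingress-addon-legacy-161764 apply -f -
	apiVersion: networking.k8s.io/v1beta1
	kind: Ingress
	metadata:
	  name: nginx-ingress
	spec:
	  rules:
	  - host: nginx.example.com
	    http:
	      paths:
	      - path: /
	        backend:
	          serviceName: nginx
	          servicePort: 80
	EOF
	# then probe it through the node, as the test does
	out/minikube-linux-amd64 -p ingress-addon-legacy-161764 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"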

                                                
                                    
x
+
TestJSONOutput/start/Command (68.8s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-137268 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0128 03:44:49.758616   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:44:49.763882   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:44:49.774137   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:44:49.794426   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:44:49.834682   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:44:49.914969   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:44:50.075402   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:44:50.396037   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:44:51.036955   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:44:52.317400   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:44:54.879135   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:44:59.999704   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-137268 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m8.796627678s)
--- PASS: TestJSONOutput/start/Command (68.80s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-137268 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.56s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-137268 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (8.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-137268 --output=json --user=testUser
E0128 03:45:10.240179   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-137268 --output=json --user=testUser: (8.113591764s)
--- PASS: TestJSONOutput/stop/Command (8.11s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.26s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-223985 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-223985 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.630493ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e00665a1-47c0-453b-9c46-add720ec7690","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-223985] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"274af654-d9ed-44a4-9936-18b67a840d64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"fef1cb3a-11fa-48f0-b77d-3adb2b303683","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1929b934-98d5-42e3-ab0e-137a508ae3b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig"}}
	{"specversion":"1.0","id":"d78f5320-2c97-4afe-96f6-5f5d98546b3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube"}}
	{"specversion":"1.0","id":"ac16df82-265d-4854-9b26-2ffa2b07071c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d44fa6f3-53af-4879-9b9e-acc52ebc5177","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5a7ad45d-7554-456b-a812-8f01537db521","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-223985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-223985
--- PASS: TestErrorJSONOutput (0.26s)
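Editor's note: every line in the stdout block above is a standalone CloudEvents JSON object, so ordinary line-oriented JSON tooling applies. A hedged sketch with jq; the field names (.type, .data.name, .data.message) are copied from the events shown above:

	# print only error events in "NAME: message" form (illustrative)
	out/minikube-linux-amd64 start -p json-output-error-223985 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'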

                                                
                                    
x
+
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
x
+
TestMinikubeProfile (109.29s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-072476 --driver=kvm2 
E0128 03:45:30.720860   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-072476 --driver=kvm2 : (53.109525336s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-075045 --driver=kvm2 
E0128 03:46:11.681550   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-075045 --driver=kvm2 : (53.439158465s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-072476
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-075045
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-075045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-075045
helpers_test.go:175: Cleaning up "first-072476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-072476
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-072476: (1.000860557s)
--- PASS: TestMinikubeProfile (109.29s)
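Editor's note: profile list -ojson, used twice above, prints machine-readable profile data. A hedged sketch of extracting the profile names from it; the .valid[].Name path is an assumption about the JSON layout, not something shown in this log:

	# list valid profile names (illustrative; JSON field names assumed)
	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'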

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.78s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-085467 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-085467 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.776285058s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.78s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-085467 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-085467 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.92s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-098291 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0128 03:47:33.602531   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-098291 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.920818231s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.92s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-098291 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-098291 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.86s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-085467 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.86s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-098291 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-098291 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                    
x
+
TestMountStart/serial/Stop (2.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-098291
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-098291: (2.238608091s)
--- PASS: TestMountStart/serial/Stop (2.24s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.96s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-098291
E0128 03:48:19.675640   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:48:19.680924   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:48:19.691187   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:48:19.711429   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:48:19.751664   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:48:19.832081   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:48:19.992481   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:48:20.313037   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:48:20.954161   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:48:22.234629   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:48:24.795482   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-098291: (21.964086336s)
E0128 03:48:27.091380   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (22.96s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-098291 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-098291 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (131.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-940074 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0128 03:48:29.916583   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:48:40.157739   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:49:00.638616   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:49:41.599409   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:49:49.759434   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:50:17.442774   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-940074 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m11.293418895s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.71s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-940074 -- rollout status deployment/busybox: (2.406149664s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- exec busybox-6b86dd6d48-5w5b8 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- exec busybox-6b86dd6d48-nbv6x -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- exec busybox-6b86dd6d48-5w5b8 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- exec busybox-6b86dd6d48-nbv6x -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- exec busybox-6b86dd6d48-5w5b8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- exec busybox-6b86dd6d48-nbv6x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.33s)
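Editor's note: the manifest applied from testdata/multinodes/multinode-pod-dns-test.yaml is not shown in the log; the pod names and the per-pod DNS checks indicate a two-replica busybox Deployment, presumably spread across both nodes. A minimal illustrative sketch of a manifest with that shape (not the repository's testdata; image and command are assumptions):

	cat > multinode-dns-test.yaml <<'EOF'
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  name: busybox
	spec:
	  replicas: 2
	  selector:
	    matchLabels:
	      app: busybox
	  template:
	    metadata:
	      labels:
	        app: busybox
	    spec:
	      containers:
	      - name: busybox
	        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	        command: ["sleep", "3600"]
	EOF
	out/minikube-linux-amd64 kubectl -p multinode-940074 -- apply -f multinode-dns-test.yaml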

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- exec busybox-6b86dd6d48-5w5b8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- exec busybox-6b86dd6d48-5w5b8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- exec busybox-6b86dd6d48-nbv6x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-940074 -- exec busybox-6b86dd6d48-nbv6x -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (53.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-940074 -v 3 --alsologtostderr
E0128 03:51:03.521398   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-940074 -v 3 --alsologtostderr: (53.063163727s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.65s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 cp testdata/cp-test.txt multinode-940074:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 cp multinode-940074:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2986739632/001/cp-test_multinode-940074.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 cp multinode-940074:/home/docker/cp-test.txt multinode-940074-m02:/home/docker/cp-test_multinode-940074_multinode-940074-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074-m02 "sudo cat /home/docker/cp-test_multinode-940074_multinode-940074-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 cp multinode-940074:/home/docker/cp-test.txt multinode-940074-m03:/home/docker/cp-test_multinode-940074_multinode-940074-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074-m03 "sudo cat /home/docker/cp-test_multinode-940074_multinode-940074-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 cp testdata/cp-test.txt multinode-940074-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 cp multinode-940074-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2986739632/001/cp-test_multinode-940074-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 cp multinode-940074-m02:/home/docker/cp-test.txt multinode-940074:/home/docker/cp-test_multinode-940074-m02_multinode-940074.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074 "sudo cat /home/docker/cp-test_multinode-940074-m02_multinode-940074.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 cp multinode-940074-m02:/home/docker/cp-test.txt multinode-940074-m03:/home/docker/cp-test_multinode-940074-m02_multinode-940074-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074-m03 "sudo cat /home/docker/cp-test_multinode-940074-m02_multinode-940074-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 cp testdata/cp-test.txt multinode-940074-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 cp multinode-940074-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2986739632/001/cp-test_multinode-940074-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 cp multinode-940074-m03:/home/docker/cp-test.txt multinode-940074:/home/docker/cp-test_multinode-940074-m03_multinode-940074.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074 "sudo cat /home/docker/cp-test_multinode-940074-m03_multinode-940074.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 cp multinode-940074-m03:/home/docker/cp-test.txt multinode-940074-m02:/home/docker/cp-test_multinode-940074-m03_multinode-940074-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 ssh -n multinode-940074-m02 "sudo cat /home/docker/cp-test_multinode-940074-m03_multinode-940074-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.90s)
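Note: the copy checks above all follow the same pattern; with <profile>, <node> and <path> as placeholders for the values used in this run:
	minikube -p <profile> cp <local-file> <node>:<path>          # host to node
	minikube -p <profile> cp <node-a>:<path> <node-b>:<path>     # node to node
	minikube -p <profile> ssh -n <node> "sudo cat <path>"        # verify the copied contents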

                                                
                                    
TestMultiNode/serial/StopNode (3.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-940074 node stop m03: (2.449607484s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-940074 status: exit status 7 (439.618253ms)

                                                
                                                
-- stdout --
	multinode-940074
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-940074-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-940074-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-940074 status --alsologtostderr: exit status 7 (434.467722ms)

                                                
                                                
-- stdout --
	multinode-940074
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-940074-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-940074-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0128 03:51:50.312967   22167 out.go:296] Setting OutFile to fd 1 ...
	I0128 03:51:50.313199   22167 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 03:51:50.313218   22167 out.go:309] Setting ErrFile to fd 2...
	I0128 03:51:50.313226   22167 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 03:51:50.313493   22167 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3903/.minikube/bin
	I0128 03:51:50.313763   22167 out.go:303] Setting JSON to false
	I0128 03:51:50.313793   22167 mustload.go:65] Loading cluster: multinode-940074
	I0128 03:51:50.314198   22167 notify.go:220] Checking for updates...
	I0128 03:51:50.314717   22167 config.go:180] Loaded profile config "multinode-940074": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 03:51:50.314746   22167 status.go:255] checking status of multinode-940074 ...
	I0128 03:51:50.315230   22167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:51:50.315283   22167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:51:50.332533   22167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0128 03:51:50.332935   22167 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:51:50.333439   22167 main.go:141] libmachine: Using API Version  1
	I0128 03:51:50.333459   22167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:51:50.333863   22167 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:51:50.334044   22167 main.go:141] libmachine: (multinode-940074) Calling .GetState
	I0128 03:51:50.335500   22167 status.go:330] multinode-940074 host status = "Running" (err=<nil>)
	I0128 03:51:50.335514   22167 host.go:66] Checking if "multinode-940074" exists ...
	I0128 03:51:50.335802   22167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:51:50.335841   22167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:51:50.350645   22167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41429
	I0128 03:51:50.350963   22167 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:51:50.351313   22167 main.go:141] libmachine: Using API Version  1
	I0128 03:51:50.351331   22167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:51:50.351683   22167 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:51:50.351871   22167 main.go:141] libmachine: (multinode-940074) Calling .GetIP
	I0128 03:51:50.354039   22167 main.go:141] libmachine: (multinode-940074) DBG | domain multinode-940074 has defined MAC address 52:54:00:14:e9:bd in network mk-multinode-940074
	I0128 03:51:50.354392   22167 main.go:141] libmachine: (multinode-940074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e9:bd", ip: ""} in network mk-multinode-940074: {Iface:virbr1 ExpiryTime:2023-01-28 04:48:43 +0000 UTC Type:0 Mac:52:54:00:14:e9:bd Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-940074 Clientid:01:52:54:00:14:e9:bd}
	I0128 03:51:50.354432   22167 main.go:141] libmachine: (multinode-940074) DBG | domain multinode-940074 has defined IP address 192.168.39.141 and MAC address 52:54:00:14:e9:bd in network mk-multinode-940074
	I0128 03:51:50.354558   22167 host.go:66] Checking if "multinode-940074" exists ...
	I0128 03:51:50.354819   22167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:51:50.354849   22167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:51:50.369622   22167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33047
	I0128 03:51:50.369916   22167 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:51:50.370326   22167 main.go:141] libmachine: Using API Version  1
	I0128 03:51:50.370347   22167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:51:50.370610   22167 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:51:50.370767   22167 main.go:141] libmachine: (multinode-940074) Calling .DriverName
	I0128 03:51:50.370948   22167 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 03:51:50.370980   22167 main.go:141] libmachine: (multinode-940074) Calling .GetSSHHostname
	I0128 03:51:50.373380   22167 main.go:141] libmachine: (multinode-940074) DBG | domain multinode-940074 has defined MAC address 52:54:00:14:e9:bd in network mk-multinode-940074
	I0128 03:51:50.373763   22167 main.go:141] libmachine: (multinode-940074) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:e9:bd", ip: ""} in network mk-multinode-940074: {Iface:virbr1 ExpiryTime:2023-01-28 04:48:43 +0000 UTC Type:0 Mac:52:54:00:14:e9:bd Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-940074 Clientid:01:52:54:00:14:e9:bd}
	I0128 03:51:50.373799   22167 main.go:141] libmachine: (multinode-940074) DBG | domain multinode-940074 has defined IP address 192.168.39.141 and MAC address 52:54:00:14:e9:bd in network mk-multinode-940074
	I0128 03:51:50.373878   22167 main.go:141] libmachine: (multinode-940074) Calling .GetSSHPort
	I0128 03:51:50.374044   22167 main.go:141] libmachine: (multinode-940074) Calling .GetSSHKeyPath
	I0128 03:51:50.374187   22167 main.go:141] libmachine: (multinode-940074) Calling .GetSSHUsername
	I0128 03:51:50.374332   22167 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/multinode-940074/id_rsa Username:docker}
	I0128 03:51:50.461932   22167 ssh_runner.go:195] Run: systemctl --version
	I0128 03:51:50.466949   22167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 03:51:50.483267   22167 kubeconfig.go:92] found "multinode-940074" server: "https://192.168.39.141:8443"
	I0128 03:51:50.483288   22167 api_server.go:165] Checking apiserver status ...
	I0128 03:51:50.483308   22167 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 03:51:50.493905   22167 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1769/cgroup
	I0128 03:51:50.501008   22167 api_server.go:181] apiserver freezer: "7:freezer:/kubepods/burstable/podc9117bf6636acb71b7af83c73e290fb4/23bbe22718c5c999430f12414e278049a5769a5f4a6661eec67864d394472367"
	I0128 03:51:50.501044   22167 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podc9117bf6636acb71b7af83c73e290fb4/23bbe22718c5c999430f12414e278049a5769a5f4a6661eec67864d394472367/freezer.state
	I0128 03:51:50.508401   22167 api_server.go:203] freezer state: "THAWED"
	I0128 03:51:50.508419   22167 api_server.go:252] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I0128 03:51:50.512243   22167 api_server.go:278] https://192.168.39.141:8443/healthz returned 200:
	ok
	I0128 03:51:50.512260   22167 status.go:421] multinode-940074 apiserver status = Running (err=<nil>)
	I0128 03:51:50.512267   22167 status.go:257] multinode-940074 status: &{Name:multinode-940074 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0128 03:51:50.512282   22167 status.go:255] checking status of multinode-940074-m02 ...
	I0128 03:51:50.512564   22167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:51:50.512601   22167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:51:50.527197   22167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0128 03:51:50.527605   22167 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:51:50.528031   22167 main.go:141] libmachine: Using API Version  1
	I0128 03:51:50.528050   22167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:51:50.528309   22167 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:51:50.528479   22167 main.go:141] libmachine: (multinode-940074-m02) Calling .GetState
	I0128 03:51:50.529902   22167 status.go:330] multinode-940074-m02 host status = "Running" (err=<nil>)
	I0128 03:51:50.529925   22167 host.go:66] Checking if "multinode-940074-m02" exists ...
	I0128 03:51:50.530198   22167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:51:50.530232   22167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:51:50.544428   22167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39391
	I0128 03:51:50.544733   22167 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:51:50.545116   22167 main.go:141] libmachine: Using API Version  1
	I0128 03:51:50.545138   22167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:51:50.545452   22167 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:51:50.545592   22167 main.go:141] libmachine: (multinode-940074-m02) Calling .GetIP
	I0128 03:51:50.547940   22167 main.go:141] libmachine: (multinode-940074-m02) DBG | domain multinode-940074-m02 has defined MAC address 52:54:00:c6:d0:96 in network mk-multinode-940074
	I0128 03:51:50.548319   22167 main.go:141] libmachine: (multinode-940074-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:96", ip: ""} in network mk-multinode-940074: {Iface:virbr1 ExpiryTime:2023-01-28 04:50:01 +0000 UTC Type:0 Mac:52:54:00:c6:d0:96 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-940074-m02 Clientid:01:52:54:00:c6:d0:96}
	I0128 03:51:50.548339   22167 main.go:141] libmachine: (multinode-940074-m02) DBG | domain multinode-940074-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:c6:d0:96 in network mk-multinode-940074
	I0128 03:51:50.548414   22167 host.go:66] Checking if "multinode-940074-m02" exists ...
	I0128 03:51:50.548682   22167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:51:50.548718   22167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:51:50.562994   22167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42221
	I0128 03:51:50.563296   22167 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:51:50.563752   22167 main.go:141] libmachine: Using API Version  1
	I0128 03:51:50.563770   22167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:51:50.564037   22167 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:51:50.564203   22167 main.go:141] libmachine: (multinode-940074-m02) Calling .DriverName
	I0128 03:51:50.564354   22167 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 03:51:50.564369   22167 main.go:141] libmachine: (multinode-940074-m02) Calling .GetSSHHostname
	I0128 03:51:50.566797   22167 main.go:141] libmachine: (multinode-940074-m02) DBG | domain multinode-940074-m02 has defined MAC address 52:54:00:c6:d0:96 in network mk-multinode-940074
	I0128 03:51:50.567092   22167 main.go:141] libmachine: (multinode-940074-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:d0:96", ip: ""} in network mk-multinode-940074: {Iface:virbr1 ExpiryTime:2023-01-28 04:50:01 +0000 UTC Type:0 Mac:52:54:00:c6:d0:96 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-940074-m02 Clientid:01:52:54:00:c6:d0:96}
	I0128 03:51:50.567121   22167 main.go:141] libmachine: (multinode-940074-m02) DBG | domain multinode-940074-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:c6:d0:96 in network mk-multinode-940074
	I0128 03:51:50.567241   22167 main.go:141] libmachine: (multinode-940074-m02) Calling .GetSSHPort
	I0128 03:51:50.567447   22167 main.go:141] libmachine: (multinode-940074-m02) Calling .GetSSHKeyPath
	I0128 03:51:50.567606   22167 main.go:141] libmachine: (multinode-940074-m02) Calling .GetSSHUsername
	I0128 03:51:50.567749   22167 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15565-3903/.minikube/machines/multinode-940074-m02/id_rsa Username:docker}
	I0128 03:51:50.653540   22167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 03:51:50.665214   22167 status.go:257] multinode-940074-m02 status: &{Name:multinode-940074-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0128 03:51:50.665242   22167 status.go:255] checking status of multinode-940074-m03 ...
	I0128 03:51:50.665511   22167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:51:50.665557   22167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:51:50.679823   22167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0128 03:51:50.680216   22167 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:51:50.680664   22167 main.go:141] libmachine: Using API Version  1
	I0128 03:51:50.680689   22167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:51:50.681040   22167 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:51:50.681255   22167 main.go:141] libmachine: (multinode-940074-m03) Calling .GetState
	I0128 03:51:50.682748   22167 status.go:330] multinode-940074-m03 host status = "Stopped" (err=<nil>)
	I0128 03:51:50.682763   22167 status.go:343] host is not running, skipping remaining checks
	I0128 03:51:50.682770   22167 status.go:257] multinode-940074-m03 status: &{Name:multinode-940074-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.32s)
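Note: the single-node stop above amounts to roughly the following (placeholders as before); status exits with code 7 while any node is stopped:
	minikube -p <profile> node stop m03
	minikube -p <profile> status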

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-940074 node start m03 --alsologtostderr: (29.35741841s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.98s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (159.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-940074
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-940074
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-940074: (27.67926818s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-940074 --wait=true -v=8 --alsologtostderr
E0128 03:53:19.675485   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:53:27.092361   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 03:53:47.361997   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:54:49.758849   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 03:54:50.136388   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-940074 --wait=true -v=8 --alsologtostderr: (2m12.061744208s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-940074
--- PASS: TestMultiNode/serial/RestartKeepsNodes (159.88s)
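Note: in outline, the restart sequence above is a full stop followed by a start with --wait=true, with node list run before and after to confirm the same nodes come back (<profile> is a placeholder):
	minikube node list -p <profile>
	minikube stop -p <profile>
	minikube start -p <profile> --wait=true
	minikube node list -p <profile>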

                                                
                                    
TestMultiNode/serial/DeleteNode (1.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-940074 node delete m03: (1.234077337s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.76s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (25.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-940074 stop: (25.347218942s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-940074 status: exit status 7 (100.762816ms)

                                                
                                                
-- stdout --
	multinode-940074
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-940074-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-940074 status --alsologtostderr: exit status 7 (102.13435ms)

                                                
                                                
-- stdout --
	multinode-940074
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-940074-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0128 03:55:27.819822   23126 out.go:296] Setting OutFile to fd 1 ...
	I0128 03:55:27.819920   23126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 03:55:27.819929   23126 out.go:309] Setting ErrFile to fd 2...
	I0128 03:55:27.819933   23126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 03:55:27.820028   23126 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3903/.minikube/bin
	I0128 03:55:27.820186   23126 out.go:303] Setting JSON to false
	I0128 03:55:27.820211   23126 mustload.go:65] Loading cluster: multinode-940074
	I0128 03:55:27.820235   23126 notify.go:220] Checking for updates...
	I0128 03:55:27.820645   23126 config.go:180] Loaded profile config "multinode-940074": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 03:55:27.820666   23126 status.go:255] checking status of multinode-940074 ...
	I0128 03:55:27.821008   23126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:55:27.821044   23126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:55:27.835525   23126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36003
	I0128 03:55:27.835989   23126 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:55:27.836685   23126 main.go:141] libmachine: Using API Version  1
	I0128 03:55:27.836921   23126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:55:27.837323   23126 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:55:27.837530   23126 main.go:141] libmachine: (multinode-940074) Calling .GetState
	I0128 03:55:27.838994   23126 status.go:330] multinode-940074 host status = "Stopped" (err=<nil>)
	I0128 03:55:27.839010   23126 status.go:343] host is not running, skipping remaining checks
	I0128 03:55:27.839016   23126 status.go:257] multinode-940074 status: &{Name:multinode-940074 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0128 03:55:27.839034   23126 status.go:255] checking status of multinode-940074-m02 ...
	I0128 03:55:27.839302   23126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0128 03:55:27.839325   23126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0128 03:55:27.852990   23126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I0128 03:55:27.853307   23126 main.go:141] libmachine: () Calling .GetVersion
	I0128 03:55:27.853696   23126 main.go:141] libmachine: Using API Version  1
	I0128 03:55:27.853707   23126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0128 03:55:27.853980   23126 main.go:141] libmachine: () Calling .GetMachineName
	I0128 03:55:27.854142   23126 main.go:141] libmachine: (multinode-940074-m02) Calling .GetState
	I0128 03:55:27.855410   23126 status.go:330] multinode-940074-m02 host status = "Stopped" (err=<nil>)
	I0128 03:55:27.855424   23126 status.go:343] host is not running, skipping remaining checks
	I0128 03:55:27.855430   23126 status.go:257] multinode-940074-m02 status: &{Name:multinode-940074-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.55s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (102.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-940074 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-940074 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m41.731337426s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-940074 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (102.29s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (55.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-940074
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-940074-m02 --driver=kvm2 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-940074-m02 --driver=kvm2 : exit status 14 (89.072069ms)

                                                
                                                
-- stdout --
	* [multinode-940074-m02] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-940074-m02' is duplicated with machine name 'multinode-940074-m02' in profile 'multinode-940074'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-940074-m03 --driver=kvm2 
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-940074-m03 --driver=kvm2 : (54.554791086s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-940074
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-940074: exit status 80 (233.634804ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-940074
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-940074-m03 already exists in multinode-940074-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-940074-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (55.93s)

                                                
                                    
TestPreload (162.58s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-996563 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0128 03:58:19.675144   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 03:58:27.092208   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-996563 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m23.511289809s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-996563 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-996563
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-996563: (13.121335598s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-996563 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0128 03:59:49.758711   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-996563 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m3.839929153s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-996563 -- docker images
helpers_test.go:175: Cleaning up "test-preload-996563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-996563
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-996563: (1.044525549s)
--- PASS: TestPreload (162.58s)
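Note: the preload check above follows this outline (the generated profile name is shortened to <profile>): start without preloads on an older Kubernetes, pull an extra image inside the VM, stop, restart on the default version, then confirm the image survived:
	minikube start -p <profile> --preload=false --kubernetes-version=v1.24.4
	minikube ssh -p <profile> -- docker pull gcr.io/k8s-minikube/busybox
	minikube stop -p <profile>
	minikube start -p <profile>
	minikube ssh -p <profile> -- docker images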

                                                
                                    
TestScheduledStopUnix (126.14s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-179108 --memory=2048 --driver=kvm2 
E0128 04:01:12.805432   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-179108 --memory=2048 --driver=kvm2 : (54.321160725s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179108 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-179108 -n scheduled-stop-179108
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179108 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179108 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179108 -n scheduled-stop-179108
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-179108
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-179108 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-179108
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-179108: exit status 7 (86.801341ms)

                                                
                                                
-- stdout --
	scheduled-stop-179108
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179108 -n scheduled-stop-179108
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-179108 -n scheduled-stop-179108: exit status 7 (89.010154ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-179108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-179108
--- PASS: TestScheduledStopUnix (126.14s)
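Note: the scheduled-stop flow exercised above, with <profile> as a placeholder:
	minikube stop -p <profile> --schedule 5m                  # schedule a stop
	minikube status --format={{.TimeToStop}} -p <profile>     # inspect the pending schedule
	minikube stop -p <profile> --cancel-scheduled             # cancel it
	minikube stop -p <profile> --schedule 15s                 # reschedule and let it fire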

                                                
                                    
TestSkaffold (84.47s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe4157552543 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-149845 --memory=2600 --driver=kvm2 
E0128 04:03:19.675586   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 04:03:27.092420   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-149845 --memory=2600 --driver=kvm2 : (53.452919555s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe4157552543 run --minikube-profile skaffold-149845 --kube-context skaffold-149845 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe4157552543 run --minikube-profile skaffold-149845 --kube-context skaffold-149845 --status-check=true --port-forward=false --interactive=false: (19.374033104s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6758887bb6-67nqz" [07901b5b-d45a-43e7-b487-b79af6e20a55] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.016577676s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-784d9cf9df-x45fp" [1427bcb2-86d6-4908-9cf0-c47ee68935e4] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.008268032s
helpers_test.go:175: Cleaning up "skaffold-149845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-149845
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-149845: (1.055375148s)
--- PASS: TestSkaffold (84.47s)
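Note: the skaffold invocation above targets an existing minikube profile directly; with <profile> as a placeholder it is roughly:
	skaffold run --minikube-profile <profile> --kube-context <profile> --status-check=true --port-forward=false --interactive=false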

                                                
                                    
TestRunningBinaryUpgrade (172.76s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.6.2.3945997153.exe start -p running-upgrade-482422 --memory=2200 --vm-driver=kvm2 

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.6.2.3945997153.exe start -p running-upgrade-482422 --memory=2200 --vm-driver=kvm2 : (1m32.949953577s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-482422 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-482422 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m16.954428216s)
helpers_test.go:175: Cleaning up "running-upgrade-482422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-482422

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-482422: (2.551407117s)
--- PASS: TestRunningBinaryUpgrade (172.76s)

                                                
                                    
TestKubernetesUpgrade (183.44s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-994986 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-994986 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m9.29724074s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-994986
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-994986: (12.13820727s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-994986 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-994986 status --format={{.Host}}: exit status 7 (106.628066ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-994986 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-994986 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2 : (51.376762108s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-994986 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-994986 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-994986 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (163.312391ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-994986] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-994986
	    minikube start -p kubernetes-upgrade-994986 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9949862 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-994986 --kubernetes-version=v1.26.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-994986 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-994986 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2 : (49.076494132s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-994986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-994986
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-994986: (1.218603225s)
--- PASS: TestKubernetesUpgrade (183.44s)
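Note: the upgrade path exercised above is, in outline (<profile> as a placeholder): start on v1.16.0, stop, start again on v1.26.1; an in-place downgrade back to v1.16.0 is refused with K8S_DOWNGRADE_UNSUPPORTED, as shown in the stderr above:
	minikube start -p <profile> --kubernetes-version=v1.16.0
	minikube stop -p <profile>
	minikube start -p <profile> --kubernetes-version=v1.26.1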

                                                
                                    
TestPause/serial/Start (150.73s)

                                                
                                                
=== RUN   TestPause/serial/Start

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-539738 --memory=2048 --install-addons=false --wait=all --driver=kvm2 

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-539738 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (2m30.731782837s)
--- PASS: TestPause/serial/Start (150.73s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-398207 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-398207 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (89.251883ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-398207] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15565-3903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
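Note: as the stderr above shows, --no-kubernetes cannot be combined with --kubernetes-version; the conflicting global setting can be cleared with:
	minikube config unset kubernetes-version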

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (107.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-398207 --driver=kvm2 
E0128 04:04:42.723687   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 04:04:49.759530   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-398207 --driver=kvm2 : (1m47.657011657s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-398207 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (107.97s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (30.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-398207 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-398207 --no-kubernetes --driver=kvm2 : (29.662856642s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-398207 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-398207 status -o json: exit status 2 (258.290502ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-398207","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-398207
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-398207: (1.067130217s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.99s)

                                                
                                    
TestNoKubernetes/serial/Start (27.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-398207 --no-kubernetes --driver=kvm2 

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-398207 --no-kubernetes --driver=kvm2 : (27.619172994s)
--- PASS: TestNoKubernetes/serial/Start (27.62s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-398207 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-398207 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.908199ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
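Note: the kubelet check above relies on systemctl's exit status inside the node; roughly, with <profile> as a placeholder, a non-zero exit means kubelet is not running:
	minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet"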

                                                
                                    
TestNoKubernetes/serial/ProfileList (20.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json

                                                
                                                
=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (19.36002714s)
--- PASS: TestNoKubernetes/serial/ProfileList (20.07s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (198.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.6.2.1651377154.exe start -p stopped-upgrade-426786 --memory=2200 --vm-driver=kvm2 

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.6.2.1651377154.exe start -p stopped-upgrade-426786 --memory=2200 --vm-driver=kvm2 : (1m28.372694605s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.6.2.1651377154.exe -p stopped-upgrade-426786 stop

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.6.2.1651377154.exe -p stopped-upgrade-426786 stop: (13.092579068s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-426786 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0128 04:09:09.983045   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:09:09.988322   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:09:09.998594   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:09:10.018839   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:09:10.059176   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:09:10.139510   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:09:10.300534   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:09:10.621288   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:09:11.262209   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:09:12.542884   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-426786 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m36.902053657s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (198.37s)
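Note: the upgrade path exercised above is: create a cluster with an old release, stop it, then start the same profile with the binary under test. A condensed sketch of that flow, using the same commands the log records (the temporary v1.6.2 binary path is specific to this run):

    # 1. Create the cluster with an old minikube release (old releases use --vm-driver)
    /tmp/minikube-v1.6.2.1651377154.exe start -p stopped-upgrade-426786 --memory=2200 --vm-driver=kvm2
    # 2. Stop it with the same old release
    /tmp/minikube-v1.6.2.1651377154.exe -p stopped-upgrade-426786 stop
    # 3. Start the stopped cluster with the binary under test; this must succeed for the upgrade test to pass
    out/minikube-linux-amd64 start -p stopped-upgrade-426786 --memory=2200 --alsologtostderr -v=1 --driver=kvm2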

                                                
                                    
TestNoKubernetes/serial/Stop (2.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-398207
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-398207: (2.197078683s)
--- PASS: TestNoKubernetes/serial/Stop (2.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-398207 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-398207 --driver=kvm2 : (23.920834066s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.92s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-398207 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-398207 "sudo systemctl is-active --quiet service kubelet": exit status 1 (306.659901ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-426786
version_upgrade_test.go:214: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-426786: (1.029329023s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (108.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0128 04:11:53.827983   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m48.932941389s)
--- PASS: TestNetworkPlugins/group/auto/Start (108.93s)
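Note: the "auto" variant above starts minikube with no CNI flag at all; the other variants in this group select a plugin explicitly, as the later start commands in this section show. A condensed sketch of the flag differences (profile names are taken from this run; --alsologtostderr is omitted for brevity):

    # Default CNI selection ("auto"): no --cni flag
    out/minikube-linux-amd64 start -p auto-877541 --memory=3072 --wait=true --wait-timeout=15m --driver=kvm2
    # Named plugins are selected with --cni; a custom manifest can be passed as a file path
    out/minikube-linux-amd64 start -p kindnet-877541 --memory=3072 --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2
    out/minikube-linux-amd64 start -p custom-flannel-877541 --memory=3072 --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2
    # kubenet is selected through --network-plugin rather than --cni
    out/minikube-linux-amd64 start -p kubenet-877541 --memory=3072 --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2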

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (106.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m46.4915967s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (106.49s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (108.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m48.095022897s)
--- PASS: TestNetworkPlugins/group/calico/Start (108.10s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-877541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-877541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-w85bc" [b49545bc-3c6c-48b0-b295-77e869b3d0b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-w85bc" [b49545bc-3c6c-48b0-b295-77e869b3d0b2] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.007788355s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.41s)
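Note: the NetCatPod step deploys a small netcat/dnsutils deployment from testdata/netcat-deployment.yaml and then polls until its pod is Ready. A roughly equivalent manual sequence is sketched below; kubectl wait is an illustrative stand-in for the test's own polling helper, not what the harness actually runs:

    # (Re)create the netcat test deployment in the target cluster context
    kubectl --context auto-877541 replace --force -f testdata/netcat-deployment.yaml
    # Wait for the pod labelled app=netcat to report Ready (the test allows up to 15m)
    kubectl --context auto-877541 wait --for=condition=ready pod -l app=netcat --timeout=15m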

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-877541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
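Note: the DNS, Localhost and HairPin checks above are all exec-based probes into the netcat deployment; HairPin differs from Localhost only in that it targets the netcat service name instead of localhost, so the connection loops back to the pod through the cluster network. The three probes, as recorded in the log:

    # DNS: cluster DNS must resolve the kubernetes service
    kubectl --context auto-877541 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: port 8080 is reachable inside the pod itself
    kubectl --context auto-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod reaches itself via the "netcat" service name (hairpin traffic)
    kubectl --context auto-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"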

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-jdnns" [c6504545-5082-4f59-b201-c91204ee04a9] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.018221982s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-877541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (15.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-877541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-lcbd4" [49ac29a2-611b-4494-ac6a-2a4d6d1f32d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:344: "netcat-694fc96674-lcbd4" [49ac29a2-611b-4494-ac6a-2a4d6d1f32d8] Running
E0128 04:14:09.983428   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.015282408s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (89.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m29.314173131s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-877541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/Start (85.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p false-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0128 04:14:37.668813   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:14:49.759543   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 04:15:00.118442   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
E0128 04:15:00.123794   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
E0128 04:15:00.134083   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
E0128 04:15:00.154922   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
E0128 04:15:00.195194   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
E0128 04:15:00.275519   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
E0128 04:15:00.436156   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
E0128 04:15:00.756564   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
E0128 04:15:01.397552   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
E0128 04:15:02.678730   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
E0128 04:15:05.239223   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
E0128 04:15:10.359935   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p false-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m25.412770034s)
--- PASS: TestNetworkPlugins/group/false/Start (85.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (109.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m49.642596706s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (109.64s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-65fs5" [59b6d5cd-8306-477e-a0e9-ae7503144ab6] Running
E0128 04:15:20.600514   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.023172286s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-877541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-877541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-tnwgs" [5d9f419a-3b5a-4929-beef-d07939ace1d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-tnwgs" [5d9f419a-3b5a-4929-beef-d07939ace1d2] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.011431446s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.54s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-877541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-877541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-5xw4x" [bac84531-2da4-433b-93d3-6150c1a9c503] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-flannel/NetCatPod
helpers_test.go:344: "netcat-694fc96674-5xw4x" [bac84531-2da4-433b-93d3-6150c1a9c503] Running
E0128 04:15:41.080671   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.019621259s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-877541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.38s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-877541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-877541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-877541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-rflmr" [884275de-e896-4f8b-ad7e-fd7095966b9c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:344: "netcat-694fc96674-rflmr" [884275de-e896-4f8b-ad7e-fd7095966b9c] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.007934116s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.44s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (83.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m23.785749152s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.79s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (104.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m44.975343825s)
--- PASS: TestNetworkPlugins/group/bridge/Start (104.98s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-877541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (113.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-877541 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m53.116660109s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (113.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-877541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-877541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-7ccns" [970b463c-75e7-4cbd-898f-44b641fa066f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-7ccns" [970b463c-75e7-4cbd-898f-44b641fa066f] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.015553034s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.47s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-877541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-24crg" [c1f83fd6-8c68-4ec8-b9a3-ff5c0bd0a0aa] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.020555766s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-877541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-877541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-xlgdt" [c90ab3e8-c6d7-4969-a7d6-5d600628436f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-xlgdt" [c90ab3e8-c6d7-4969-a7d6-5d600628436f] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.020218808s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (149.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-883473 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-883473 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m29.523336556s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (149.52s)
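Note: the StartStop group pins each cluster to a specific Kubernetes version and start mode; the first-start commands in this section differ only in those flags. A condensed sketch of the variants exercised here (profile names from this run; --alsologtostderr omitted for brevity):

    # Oldest supported Kubernetes release, explicit KVM network and QEMU URI
    out/minikube-linux-amd64 start -p old-k8s-version-883473 --memory=2200 --wait=true \
      --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
      --keep-context=false --driver=kvm2 --kubernetes-version=v1.16.0
    # Current release without preloaded images
    out/minikube-linux-amd64 start -p no-preload-939398 --memory=2200 --wait=true --preload=false \
      --driver=kvm2 --kubernetes-version=v1.26.1
    # Certificates embedded in the kubeconfig instead of referenced as files
    out/minikube-linux-amd64 start -p embed-certs-766983 --memory=2200 --wait=true --embed-certs \
      --driver=kvm2 --kubernetes-version=v1.26.1
    # API server on a non-default port (8444)
    out/minikube-linux-amd64 start -p default-k8s-diff-port-627187 --memory=2200 --wait=true \
      --apiserver-port=8444 --driver=kvm2 --kubernetes-version=v1.26.1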

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-877541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-877541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-877541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-g4jxm" [8443385b-1f69-4cfe-b3cc-8b1e1f82a993] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0128 04:17:52.805839   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-g4jxm" [8443385b-1f69-4cfe-b3cc-8b1e1f82a993] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.00872667s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (126.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-939398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.26.1

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-939398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.26.1: (2m6.712127591s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (126.71s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-877541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-877541 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-877541 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-jdvgp" [20e977fd-2388-4d72-b508-5395a0c3f84a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:344: "netcat-694fc96674-jdvgp" [20e977fd-2388-4d72-b508-5395a0c3f84a] Running
E0128 04:18:27.091426   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.007378329s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (86.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-766983 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-766983 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.1: (1m26.665228078s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.67s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-877541 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-877541 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)
E0128 04:24:38.853664   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:24:49.758900   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
E0128 04:24:50.178886   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:25:00.118592   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory
E0128 04:25:05.905918   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
E0128 04:25:07.191517   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
E0128 04:25:07.196809   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
E0128 04:25:07.207042   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
E0128 04:25:07.227336   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
E0128 04:25:07.267646   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
E0128 04:25:07.347937   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
E0128 04:25:07.508367   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
E0128 04:25:07.828918   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
E0128 04:25:08.469669   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
E0128 04:25:09.749853   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
E0128 04:25:12.310215   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
E0128 04:25:16.756138   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
E0128 04:25:17.430603   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-627187 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.1
E0128 04:18:49.360286   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:18:49.365564   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:18:49.375832   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:18:49.396174   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:18:49.436443   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:18:49.516792   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:18:49.677221   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:18:49.997903   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:18:50.638561   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:18:51.120563   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/auto-877541/client.crt: no such file or directory
E0128 04:18:51.919155   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:18:54.479772   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:18:59.600840   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:19:09.841303   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:19:09.983501   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:19:11.601731   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/auto-877541/client.crt: no such file or directory
E0128 04:19:30.321802   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-627187 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.1: (1m28.63718783s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.64s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-766983 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [62c08535-e052-46ea-836d-0d51740ef0e3] Pending
helpers_test.go:344: "busybox" [62c08535-e052-46ea-836d-0d51740ef0e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0128 04:19:49.758950   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/functional-868781/client.crt: no such file or directory
helpers_test.go:344: "busybox" [62c08535-e052-46ea-836d-0d51740ef0e3] Running
E0128 04:19:52.561879   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/auto-877541/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.017666223s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-766983 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-766983 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-766983 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-766983 --alsologtostderr -v=3
E0128 04:20:00.118815   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-766983 --alsologtostderr -v=3: (13.146708117s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.57s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-939398 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1562ff15-cf65-4064-ac46-92d86632ba65] Pending

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:344: "busybox" [1562ff15-cf65-4064-ac46-92d86632ba65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:344: "busybox" [1562ff15-cf65-4064-ac46-92d86632ba65] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.017633298s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-939398 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-883473 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [28241f35-b4fe-4ae4-9ff2-e2a26f9fe19d] Pending

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:344: "busybox" [28241f35-b4fe-4ae4-9ff2-e2a26f9fe19d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:344: "busybox" [28241f35-b4fe-4ae4-9ff2-e2a26f9fe19d] Running
E0128 04:20:11.282542   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.023993389s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-883473 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-766983 -n embed-certs-766983
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-766983 -n embed-certs-766983: exit status 7 (103.958673ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-766983 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (309.8s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-766983 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-766983 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.1: (5m9.528701149s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-766983 -n embed-certs-766983
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (309.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-939398 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-939398 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037961455s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-939398 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.14s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-939398 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-939398 --alsologtostderr -v=3: (13.144716052s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-883473 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-883473 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-627187 create -f testdata/busybox.yaml
E0128 04:20:16.756172   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E0128 04:20:16.761819   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
E0128 04:20:16.771910   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
helpers_test.go:344: "busybox" [5d37e4dd-61e3-47c7-93d1-8f50c9044068] Pending
E0128 04:20:16.792779   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
E0128 04:20:16.833282   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
E0128 04:20:16.913636   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
E0128 04:20:17.074234   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
helpers_test.go:344: "busybox" [5d37e4dd-61e3-47c7-93d1-8f50c9044068] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0128 04:20:18.036159   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
E0128 04:20:19.316314   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
helpers_test.go:344: "busybox" [5d37e4dd-61e3-47c7-93d1-8f50c9044068] Running
E0128 04:20:21.877185   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.020999519s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-627187 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-883473 --alsologtostderr -v=3
E0128 04:20:17.394989   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-883473 --alsologtostderr -v=3: (13.167863492s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-627187 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-627187 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-627187 --alsologtostderr -v=3
E0128 04:20:26.998126   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
E0128 04:20:27.803626   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/gvisor-252601/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-627187 --alsologtostderr -v=3: (13.133861086s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-939398 -n no-preload-939398
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-939398 -n no-preload-939398: exit status 7 (92.642847ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-939398 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (308.56s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-939398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.26.1

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-939398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.26.1: (5m8.284632117s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-939398 -n no-preload-939398
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (308.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-883473 -n old-k8s-version-883473
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-883473 -n old-k8s-version-883473: exit status 7 (95.599767ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-883473 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (98.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-883473 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E0128 04:20:32.393103   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:20:32.398505   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:20:32.408740   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:20:32.428965   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:20:32.469941   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:20:32.550240   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:20:32.710775   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:20:33.030912   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:20:33.672117   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:20:34.952681   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:20:37.238874   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
E0128 04:20:37.513201   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-883473 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (1m38.689895671s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-883473 -n old-k8s-version-883473
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (98.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-627187 -n default-k8s-diff-port-627187
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-627187 -n default-k8s-diff-port-627187: exit status 7 (88.201551ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-627187 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (357.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-627187 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.1
E0128 04:20:42.633862   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:20:52.874609   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:20:53.681970   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:20:53.687263   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:20:53.697639   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:20:53.717924   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:20:53.758164   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:20:53.838511   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:20:53.998968   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:20:54.319117   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:20:54.959304   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:20:56.240101   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:20:57.719117   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
E0128 04:20:58.800262   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:21:03.921261   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:21:13.355505   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:21:14.161733   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:21:14.482071   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/auto-877541/client.crt: no such file or directory
E0128 04:21:22.724368   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 04:21:33.203141   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:21:34.642647   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:21:38.680265   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
E0128 04:21:54.316493   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:22:06.337216   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:22:06.342489   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:22:06.352795   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:22:06.373107   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:22:06.413355   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:22:06.493687   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:22:06.653968   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:22:06.974871   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:22:07.615136   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:22:08.895474   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-627187 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.1: (5m56.877726792s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-627187 -n default-k8s-diff-port-627187
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (357.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-l9ck6" [87c07023-b331-4878-bd90-b69c62378095] Running
E0128 04:22:11.456592   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014220791s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-l9ck6" [87c07023-b331-4878-bd90-b69c62378095] Running
E0128 04:22:15.603280   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
E0128 04:22:16.577454   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007175994s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-883473 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-883473 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-883473 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-883473 -n old-k8s-version-883473
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-883473 -n old-k8s-version-883473: exit status 2 (255.893537ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-883473 -n old-k8s-version-883473
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-883473 -n old-k8s-version-883473: exit status 2 (262.810734ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-883473 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-883473 -n old-k8s-version-883473
E0128 04:22:22.061954   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
E0128 04:22:22.067276   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
E0128 04:22:22.077565   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
E0128 04:22:22.097822   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
E0128 04:22:22.138130   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-883473 -n old-k8s-version-883473
E0128 04:22:22.218820   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
E0128 04:22:22.379252   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (74.56s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-367677 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.26.1
E0128 04:22:24.621755   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
E0128 04:22:26.817902   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:22:27.182820   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
E0128 04:22:32.303152   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
E0128 04:22:42.543693   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
E0128 04:22:47.298076   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:22:48.773114   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:22:48.778343   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:22:48.788548   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:22:48.808811   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:22:48.849040   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:22:48.929426   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:22:49.090295   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:22:49.410956   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:22:50.052053   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:22:51.332399   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:22:53.893244   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:22:59.013419   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:23:00.601334   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
E0128 04:23:03.024573   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
E0128 04:23:09.253876   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:23:16.236753   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
E0128 04:23:16.931343   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:16.936646   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:16.946893   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:16.967160   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:17.007452   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:17.087735   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:17.248497   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:17.568591   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:18.208839   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:19.489412   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:19.675754   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/ingress-addon-legacy-161764/client.crt: no such file or directory
E0128 04:23:22.050439   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:27.091601   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/addons-722117/client.crt: no such file or directory
E0128 04:23:27.170858   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:28.258544   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/enable-default-cni-877541/client.crt: no such file or directory
E0128 04:23:29.734590   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:23:30.638606   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/auto-877541/client.crt: no such file or directory
E0128 04:23:37.411703   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:37.524021   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/false-877541/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-367677 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.26.1: (1m14.555072238s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (74.56s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-367677 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-367677 --alsologtostderr -v=3
E0128 04:23:43.985751   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/flannel-877541/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-367677 --alsologtostderr -v=3: (8.133602577s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-367677 -n newest-cni-367677
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-367677 -n newest-cni-367677: exit status 7 (104.816677ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-367677 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (46.79s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-367677 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.26.1
E0128 04:23:49.360946   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
E0128 04:23:57.892699   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kubenet-877541/client.crt: no such file or directory
E0128 04:23:58.323289   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/auto-877541/client.crt: no such file or directory
E0128 04:24:09.982794   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/skaffold-149845/client.crt: no such file or directory
E0128 04:24:10.694994   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/bridge-877541/client.crt: no such file or directory
E0128 04:24:17.043671   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/kindnet-877541/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-367677 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.26.1: (46.498941758s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-367677 -n newest-cni-367677
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (46.79s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-367677 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-367677 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-367677 -n newest-cni-367677
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-367677 -n newest-cni-367677: exit status 2 (258.727036ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-367677 -n newest-cni-367677
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-367677 -n newest-cni-367677: exit status 2 (280.110299ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-367677 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-367677 -n newest-cni-367677
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-367677 -n newest-cni-367677
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.43s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-qb99k" [5b6817c6-5c9d-42fd-85d7-d7c2ef8a35da] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013742157s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-qb99k" [5b6817c6-5c9d-42fd-85d7-d7c2ef8a35da] Running
E0128 04:25:27.671744   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006997293s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-766983 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-766983 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-766983 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-766983 -n embed-certs-766983
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-766983 -n embed-certs-766983: exit status 2 (277.543104ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-766983 -n embed-certs-766983
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-766983 -n embed-certs-766983: exit status 2 (258.484593ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-766983 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-766983 -n embed-certs-766983
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-766983 -n embed-certs-766983
E0128 04:25:32.393608   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/custom-flannel-877541/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.56s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-q4hww" [9e5b5719-0f98-4959-80d4-c6e9ef42f27c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011417871s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-q4hww" [9e5b5719-0f98-4959-80d4-c6e9ef42f27c] Running
E0128 04:25:44.442008   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/calico-877541/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007376394s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-939398 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-939398 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-939398 --alsologtostderr -v=1
E0128 04:25:48.152265   11062 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3903/.minikube/profiles/old-k8s-version-883473/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-939398 -n no-preload-939398
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-939398 -n no-preload-939398: exit status 2 (246.498449ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-939398 -n no-preload-939398
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-939398 -n no-preload-939398: exit status 2 (248.18474ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-939398 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-939398 -n no-preload-939398
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-939398 -n no-preload-939398
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-8944w" [8203ebe7-e46c-4d02-890e-2a878580973d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-8944w" [8203ebe7-e46c-4d02-890e-2a878580973d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.013586537s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-8944w" [8203ebe7-e46c-4d02-890e-2a878580973d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008650376s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-627187 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-627187 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-627187 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-627187 -n default-k8s-diff-port-627187
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-627187 -n default-k8s-diff-port-627187: exit status 2 (252.979204ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-627187 -n default-k8s-diff-port-627187
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-627187 -n default-k8s-diff-port-627187: exit status 2 (251.36249ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-627187 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-627187 -n default-k8s-diff-port-627187
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-627187 -n default-k8s-diff-port-627187
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.36s)

                                                
                                    

Test skip (29/300)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:292: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium
panic.go:522: 
----------------------- debugLogs start: cilium-877541 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-877541" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-877541

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-877541" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877541"

                                                
                                                
----------------------- debugLogs end: cilium-877541 [took: 4.584210208s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-877541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-877541
--- SKIP: TestNetworkPlugins/group/cilium (4.77s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-466953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-466953
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                    