Test Report: Docker_Linux_containerd 14420

7d3b93abdd89ce8ebba3c81494e660414100c7c4:2022-06-29:24669

Failed tests (5/275)

Order  Failed test                                      Duration (s)
71     TestFunctional/serial/LogsFileCmd                1.1
211    TestKubernetesUpgrade                            566.86
312    TestNetworkPlugins/group/calico/Start            517.58
329    TestNetworkPlugins/group/bridge/DNS              365.47
332    TestNetworkPlugins/group/enable-default-cni/DNS  347.25
TestFunctional/serial/LogsFileCmd (1.1s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 logs --file /tmp/TestFunctionalserialLogsFileCmd4025904781/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-linux-amd64 -p functional-20220629175813-10091 logs --file /tmp/TestFunctionalserialLogsFileCmd4025904781/001/logs.txt: (1.098375884s)
functional_test.go:1247: expected empty minikube logs output, but got: 
***
-- stdout --
	

-- /stdout --
** stderr ** 
	E0629 18:00:11.118406   38720 logs.go:192] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 6d2d40ff49d9cd50fb130befbae452df03cf779e659faad69dbedb0ad85b785b" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 6d2d40ff49d9cd50fb130befbae452df03cf779e659faad69dbedb0ad85b785b": Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-29T18:00:11Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-functional-20220629175813-10091_bbd6fbb5c2c7d50955cc10940694febc/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-functional-20220629175813-10091_bbd6fbb5c2c7d50955cc10940694febc/kube-controller-manager/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2022-06-29T18:00:11Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-controller-manager-functional-20220629175813-10091_bbd6fbb5c2c7d50955cc10940694febc/kube-controller-manager/1.log\\\": lstat /var/log/pods/kube-system_kube-controller-manager-functional-20220629175813-10091_bbd6fbb5c2c7d50955cc10940694febc/kube-controller-manager/1.log: no such file or directory\"\n\n** /stderr **"
	E0629 18:00:11.201955   38720 logs.go:192] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 ab9db04b6f78c24fd5b24c0145b582aa5bc2642cb0c9ccd5df32832f4661bef7" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 ab9db04b6f78c24fd5b24c0145b582aa5bc2642cb0c9ccd5df32832f4661bef7": Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-29T18:00:11Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-proxy-2p22h_e18f5e2e-581c-4748-87fe-9e587fe42957/kube-proxy/1.log\": lstat /var/log/pods/kube-system_kube-proxy-2p22h_e18f5e2e-581c-4748-87fe-9e587fe42957/kube-proxy/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2022-06-29T18:00:11Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-proxy-2p22h_e18f5e2e-581c-4748-87fe-9e587fe42957/kube-proxy/1.log\\\": lstat /var/log/pods/kube-system_kube-proxy-2p22h_e18f5e2e-581c-4748-87fe-9e587fe42957/kube-proxy/1.log: no such file or directory\"\n\n** /stderr **"
	E0629 18:00:11.255769   38720 logs.go:192] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 f98eb1b693c8873cb0ce470e716952b63ad539039727d9c1541030ed488dc403" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 f98eb1b693c8873cb0ce470e716952b63ad539039727d9c1541030ed488dc403": Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-29T18:00:11Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-functional-20220629175813-10091_343c7e1bdf3e76635b1cfa8cff2fbf74/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-functional-20220629175813-10091_343c7e1bdf3e76635b1cfa8cff2fbf74/kube-scheduler/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2022-06-29T18:00:11Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-functional-20220629175813-10091_343c7e1bdf3e76635b1cfa8cff2fbf74/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-functional-20220629175813-10091_343c7e1bdf3e76635b1cfa8cff2fbf74/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: kube-controller-manager [6d2d40ff49d9cd50fb130befbae452df03cf779e659faad69dbedb0ad85b785b], kube-proxy [ab9db04b6f78c24fd5b24c0145b582aa5bc2642cb0c9ccd5df32832f4661bef7], kube-scheduler [f98eb1b693c8873cb0ce470e716952b63ad539039727d9c1541030ed488dc403]

** /stderr *****
--- FAIL: TestFunctional/serial/LogsFileCmd (1.10s)
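Each `crictl logs` failure above has the same shape: the container metadata points at a pod log file under /var/log/pods that no longer exists on disk, so crictl's symlink resolution fails with `lstat ... no such file or directory`. A minimal sketch of that failure mode (the path below is hypothetical, not a container from this run; crictl derives the real path from the container's CRI metadata):

```shell
# Sketch: checking a pod log path the way crictl's lstat would see it.
# Hypothetical path, standing in for the ones quoted in the log above.
log="/var/log/pods/kube-system_example_00000000-0000-0000-0000-000000000000/example/1.log"
if [ -L "$log" ] || [ -e "$log" ]; then
  echo "log present"
else
  # Matches the failure reported by crictl in the stderr output above.
  echo "no such file or directory"
fi
```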

TestKubernetesUpgrade (566.86s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220629182055-10091 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220629182055-10091 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (45.292325928s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220629182055-10091
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220629182055-10091: (1.392461999s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220629182055-10091 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220629182055-10091 status --format={{.Host}}: exit status 7 (120.189907ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220629182055-10091 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220629182055-10091 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (8m35.899221123s)

-- stdout --
	* [kubernetes-upgrade-20220629182055-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-20220629182055-10091 in cluster kubernetes-upgrade-20220629182055-10091
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-20220629182055-10091" ...
	* Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Jun 29 18:30:17 kubernetes-upgrade-20220629182055-10091 kubelet[11610]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	
	

-- /stdout --
** stderr ** 
	I0629 18:21:42.173009  156452 out.go:296] Setting OutFile to fd 1 ...
	I0629 18:21:42.173155  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:21:42.173167  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:21:42.173174  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:21:42.173670  156452 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 18:21:42.174612  156452 out.go:303] Setting JSON to false
	I0629 18:21:42.176219  156452 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3852,"bootTime":1656523050,"procs":526,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1033-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0629 18:21:42.176311  156452 start.go:125] virtualization: kvm guest
	I0629 18:21:42.178562  156452 out.go:177] * [kubernetes-upgrade-20220629182055-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0629 18:21:42.181717  156452 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 18:21:42.180479  156452 notify.go:193] Checking for updates...
	I0629 18:21:42.180541  156452 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0629 18:21:42.184473  156452 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 18:21:42.185801  156452 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 18:21:42.187266  156452 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 18:21:42.188711  156452 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0629 18:21:42.190278  156452 config.go:178] Loaded profile config "kubernetes-upgrade-20220629182055-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0629 18:21:42.190664  156452 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 18:21:42.253809  156452 docker.go:137] docker version: linux-20.10.17
	I0629 18:21:42.253908  156452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:21:42.347915  156452 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-containerd-overlay2-amd64.tar.lz4.checksum
	I0629 18:21:42.414916  156452 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:50 SystemTime:2022-06-29 18:21:42.29799628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1033-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientI
nfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:21:42.415059  156452 docker.go:254] overlay module found
	I0629 18:21:42.417454  156452 out.go:177] * Using the docker driver based on existing profile
	I0629 18:21:42.419072  156452 start.go:284] selected driver: docker
	I0629 18:21:42.419091  156452 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-20220629182055-10091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220629182055-
10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:21:42.419195  156452 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 18:21:42.430458  156452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:21:42.620521  156452 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:69 SystemTime:2022-06-29 18:21:42.472805654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1033-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:21:42.620798  156452 cni.go:95] Creating CNI manager for ""
	I0629 18:21:42.620817  156452 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0629 18:21:42.620829  156452 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220629182055-10091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220629182055-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:21:42.622964  156452 out.go:177] * Starting control plane node kubernetes-upgrade-20220629182055-10091 in cluster kubernetes-upgrade-20220629182055-10091
	I0629 18:21:42.624920  156452 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0629 18:21:42.627466  156452 out.go:177] * Pulling base image ...
	I0629 18:21:42.629094  156452 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0629 18:21:42.629148  156452 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0629 18:21:42.629174  156452 cache.go:57] Caching tarball of preloaded images
	I0629 18:21:42.629182  156452 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 18:21:42.629464  156452 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 18:21:42.629495  156452 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0629 18:21:42.629642  156452 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629182055-10091/config.json ...
	I0629 18:21:42.671324  156452 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 18:21:42.671353  156452 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 18:21:42.671369  156452 cache.go:208] Successfully downloaded all kic artifacts
	I0629 18:21:42.671415  156452 start.go:352] acquiring machines lock for kubernetes-upgrade-20220629182055-10091: {Name:mk3d29560e96c14164afffce0c63e67af62937a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 18:21:42.671531  156452 start.go:356] acquired machines lock for "kubernetes-upgrade-20220629182055-10091" in 94.942µs
	I0629 18:21:42.671556  156452 start.go:94] Skipping create...Using existing machine configuration
	I0629 18:21:42.671561  156452 fix.go:55] fixHost starting: 
	I0629 18:21:42.671804  156452 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220629182055-10091 --format={{.State.Status}}
	I0629 18:21:42.710487  156452 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220629182055-10091: state=Stopped err=<nil>
	W0629 18:21:42.710528  156452 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 18:21:42.712798  156452 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220629182055-10091" ...
	I0629 18:21:42.714249  156452 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220629182055-10091
	I0629 18:21:43.262994  156452 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220629182055-10091 --format={{.State.Status}}
	I0629 18:21:43.308679  156452 kic.go:416] container "kubernetes-upgrade-20220629182055-10091" state is running.
	I0629 18:21:43.309248  156452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220629182055-10091
	I0629 18:21:43.352195  156452 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629182055-10091/config.json ...
	I0629 18:21:43.352416  156452 machine.go:88] provisioning docker machine ...
	I0629 18:21:43.352438  156452 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220629182055-10091"
	I0629 18:21:43.352482  156452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629182055-10091
	I0629 18:21:43.395731  156452 main.go:134] libmachine: Using SSH client type: native
	I0629 18:21:43.395936  156452 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49337 <nil> <nil>}
	I0629 18:21:43.395964  156452 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220629182055-10091 && echo "kubernetes-upgrade-20220629182055-10091" | sudo tee /etc/hostname
	I0629 18:21:43.396964  156452 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38170->127.0.0.1:49337: read: connection reset by peer
	I0629 18:21:46.540144  156452 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220629182055-10091
	
	I0629 18:21:46.540222  156452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629182055-10091
	I0629 18:21:46.588393  156452 main.go:134] libmachine: Using SSH client type: native
	I0629 18:21:46.588658  156452 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49337 <nil> <nil>}
	I0629 18:21:46.588698  156452 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220629182055-10091' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220629182055-10091/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220629182055-10091' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 18:21:46.712437  156452 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 18:21:46.712493  156452 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 18:21:46.712529  156452 ubuntu.go:177] setting up certificates
	I0629 18:21:46.712538  156452 provision.go:83] configureAuth start
	I0629 18:21:46.712583  156452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220629182055-10091
	I0629 18:21:46.744750  156452 provision.go:138] copyHostCerts
	I0629 18:21:46.744841  156452 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 18:21:46.744875  156452 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 18:21:46.819632  156452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1078 bytes)
	I0629 18:21:46.819838  156452 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 18:21:46.819855  156452 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 18:21:46.819907  156452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 18:21:46.820000  156452 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 18:21:46.820016  156452 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 18:21:46.820047  156452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1679 bytes)
	I0629 18:21:46.820132  156452 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220629182055-10091 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220629182055-10091]
	I0629 18:21:47.204394  156452 provision.go:172] copyRemoteCerts
	I0629 18:21:47.455866  156452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 18:21:47.455906  156452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629182055-10091
	I0629 18:21:47.489264  156452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629182055-10091/id_rsa Username:docker}
	I0629 18:21:47.576113  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0629 18:21:47.593352  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0629 18:21:47.609926  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 18:21:47.628158  156452 provision.go:86] duration metric: configureAuth took 915.606595ms
	I0629 18:21:47.628190  156452 ubuntu.go:193] setting minikube options for container-runtime
	I0629 18:21:47.628373  156452 config.go:178] Loaded profile config "kubernetes-upgrade-20220629182055-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:21:47.628411  156452 machine.go:91] provisioned docker machine in 4.275969622s
	I0629 18:21:47.628426  156452 start.go:306] post-start starting for "kubernetes-upgrade-20220629182055-10091" (driver="docker")
	I0629 18:21:47.628438  156452 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 18:21:47.628483  156452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 18:21:47.628529  156452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629182055-10091
	I0629 18:21:47.659448  156452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629182055-10091/id_rsa Username:docker}
	I0629 18:21:47.744109  156452 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 18:21:47.746723  156452 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 18:21:47.746745  156452 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 18:21:47.746753  156452 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 18:21:47.746758  156452 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 18:21:47.746766  156452 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 18:21:47.746805  156452 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 18:21:47.746870  156452 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/100912.pem -> 100912.pem in /etc/ssl/certs
	I0629 18:21:47.746947  156452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 18:21:47.755311  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/100912.pem --> /etc/ssl/certs/100912.pem (1708 bytes)
	I0629 18:21:47.881208  156452 start.go:309] post-start completed in 252.76116ms
	I0629 18:21:47.881332  156452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 18:21:47.881405  156452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629182055-10091
	I0629 18:21:47.924497  156452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629182055-10091/id_rsa Username:docker}
	I0629 18:21:48.009591  156452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 18:21:48.013429  156452 fix.go:57] fixHost completed within 5.341863666s
	I0629 18:21:48.013456  156452 start.go:81] releasing machines lock for "kubernetes-upgrade-20220629182055-10091", held for 5.341908877s
	I0629 18:21:48.013543  156452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220629182055-10091
	I0629 18:21:48.045315  156452 ssh_runner.go:195] Run: systemctl --version
	I0629 18:21:48.045369  156452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629182055-10091
	I0629 18:21:48.045375  156452 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 18:21:48.045427  156452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629182055-10091
	I0629 18:21:48.077902  156452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629182055-10091/id_rsa Username:docker}
	I0629 18:21:48.078301  156452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629182055-10091/id_rsa Username:docker}
	I0629 18:21:48.160984  156452 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0629 18:21:48.182328  156452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 18:21:48.191198  156452 docker.go:179] disabling docker service ...
	I0629 18:21:48.191246  156452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0629 18:21:48.200271  156452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0629 18:21:48.208692  156452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0629 18:21:48.282800  156452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0629 18:21:48.355351  156452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0629 18:21:48.364647  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 18:21:48.483704  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0629 18:21:48.493629  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0629 18:21:48.503093  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0629 18:21:48.512083  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0629 18:21:48.521239  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0629 18:21:48.531496  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0629 18:21:48.552501  156452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0629 18:21:48.567088  156452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0629 18:21:48.578096  156452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 18:21:48.680450  156452 ssh_runner.go:195] Run: sudo systemctl restart containerd
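	[editor's note] The containerd reconfiguration the log performs above (the `sed -e … config.toml` run commands) can be reproduced as a standalone sketch. This is illustrative only, not part of minikube: it applies the same substitutions to a scratch copy of a minimal, assumed config.toml rather than to /etc/containerd/config.toml, and the starting values in the sample file are hypothetical.

```shell
#!/bin/sh
# Sketch: replay minikube's containerd config edits on a scratch file.
# The initial contents below are an assumed minimal config, not a real
# containerd default; only the sed expressions mirror the log.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
sandbox_image = "k8s.gcr.io/pause:3.6"
restrict_oom_score_adj = true
SystemdCgroup = true
conf_dir = "/etc/cni/net.d"
# imports
EOF

# Same substitutions as the logged run commands:
sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i "$cfg"
sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i "$cfg"
sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i "$cfg"
sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i "$cfg"
sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i "$cfg"

result=$(cat "$cfg")
echo "$result"
rm -f "$cfg"
```

After these edits minikube reloads systemd units and restarts containerd, which is why the next log lines wait for /run/containerd/containerd.sock to appear.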
	I0629 18:21:48.783732  156452 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0629 18:21:48.783797  156452 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0629 18:21:48.788213  156452 start.go:468] Will wait 60s for crictl version
	I0629 18:21:48.788265  156452 ssh_runner.go:195] Run: sudo crictl version
	I0629 18:21:48.876122  156452 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-29T18:21:48Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0629 18:21:59.924977  156452 ssh_runner.go:195] Run: sudo crictl version
	I0629 18:21:59.950732  156452 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0629 18:21:59.950786  156452 ssh_runner.go:195] Run: containerd --version
	I0629 18:21:59.986986  156452 ssh_runner.go:195] Run: containerd --version
	I0629 18:22:00.027752  156452 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0629 18:22:00.029167  156452 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220629182055-10091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 18:22:00.075639  156452 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0629 18:22:00.078906  156452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 18:22:00.091267  156452 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0629 18:22:00.092936  156452 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0629 18:22:00.092994  156452 ssh_runner.go:195] Run: sudo crictl images --output json
	I0629 18:22:00.118956  156452 containerd.go:543] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.2". assuming images are not preloaded.
	I0629 18:22:00.119008  156452 ssh_runner.go:195] Run: which lz4
	I0629 18:22:00.121928  156452 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0629 18:22:00.125063  156452 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0629 18:22:00.125095  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (447741112 bytes)
	I0629 18:22:01.069562  156452 containerd.go:490] Took 0.947661 seconds to copy over tarball
	I0629 18:22:01.069629  156452 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0629 18:22:03.826421  156452 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.756764995s)
	I0629 18:22:03.826453  156452 containerd.go:497] Took 2.756861 seconds to extract the tarball
	I0629 18:22:03.826464  156452 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0629 18:22:03.907735  156452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 18:22:04.004758  156452 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0629 18:22:04.113324  156452 ssh_runner.go:195] Run: sudo crictl images --output json
	I0629 18:22:04.152634  156452 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.2 k8s.gcr.io/kube-controller-manager:v1.24.2 k8s.gcr.io/kube-scheduler:v1.24.2 k8s.gcr.io/kube-proxy:v1.24.2 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0629 18:22:04.152728  156452 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 18:22:04.152795  156452 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.2
	I0629 18:22:04.152817  156452 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I0629 18:22:04.152826  156452 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0629 18:22:04.152829  156452 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.2
	I0629 18:22:04.152973  156452 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.2
	I0629 18:22:04.152796  156452 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I0629 18:22:04.153022  156452 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.2
	I0629 18:22:04.154120  156452 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.2: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.2
	I0629 18:22:04.154131  156452 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.2: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.2
	I0629 18:22:04.154170  156452 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.2: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.2
	I0629 18:22:04.154212  156452 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I0629 18:22:04.154134  156452 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 18:22:04.154297  156452 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.2: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.2
	I0629 18:22:04.154712  156452 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I0629 18:22:04.154846  156452 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0629 18:22:04.365854  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.2"
	I0629 18:22:04.365949  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.2"
	I0629 18:22:04.382164  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.2"
	I0629 18:22:04.455402  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I0629 18:22:04.455993  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I0629 18:22:04.461610  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.2"
	I0629 18:22:04.513443  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0629 18:22:04.651937  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I0629 18:22:05.074305  156452 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.2" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.2" does not exist at hash "34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df" in container runtime
	I0629 18:22:05.074437  156452 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.2
	I0629 18:22:05.074474  156452 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.2" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.2" does not exist at hash "d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503" in container runtime
	I0629 18:22:05.074500  156452 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.2
	I0629 18:22:05.074533  156452 ssh_runner.go:195] Run: which crictl
	I0629 18:22:05.074533  156452 ssh_runner.go:195] Run: which crictl
	I0629 18:22:05.094627  156452 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.2" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.2" does not exist at hash "a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536" in container runtime
	I0629 18:22:05.094741  156452 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.2
	I0629 18:22:05.094796  156452 ssh_runner.go:195] Run: which crictl
	I0629 18:22:05.148462  156452 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0629 18:22:05.148549  156452 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I0629 18:22:05.148600  156452 ssh_runner.go:195] Run: which crictl
	I0629 18:22:05.176689  156452 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0629 18:22:05.176746  156452 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I0629 18:22:05.176783  156452 ssh_runner.go:195] Run: which crictl
	I0629 18:22:05.187861  156452 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.2" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.2" does not exist at hash "5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac" in container runtime
	I0629 18:22:05.187912  156452 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.2
	I0629 18:22:05.187948  156452 ssh_runner.go:195] Run: which crictl
	I0629 18:22:05.217657  156452 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0629 18:22:05.217702  156452 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 18:22:05.217739  156452 ssh_runner.go:195] Run: which crictl
	I0629 18:22:05.254306  156452 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0629 18:22:05.254355  156452 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0629 18:22:05.254362  156452 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.2
	I0629 18:22:05.254392  156452 ssh_runner.go:195] Run: which crictl
	I0629 18:22:05.254441  156452 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.2
	I0629 18:22:05.254477  156452 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.2
	I0629 18:22:05.254531  156452 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I0629 18:22:05.254595  156452 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I0629 18:22:05.254663  156452 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.2
	I0629 18:22:05.254679  156452 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 18:22:05.713403  156452 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2
	I0629 18:22:05.713506  156452 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.2
	I0629 18:22:05.713562  156452 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I0629 18:22:05.713601  156452 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I0629 18:22:05.713646  156452 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0629 18:22:05.713674  156452 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I0629 18:22:05.713710  156452 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0629 18:22:05.713806  156452 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2
	I0629 18:22:05.713869  156452 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.2
	I0629 18:22:05.717846  156452 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2
	I0629 18:22:05.717997  156452 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.2
	I0629 18:22:05.718775  156452 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2
	I0629 18:22:05.718852  156452 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.2
	I0629 18:22:05.718910  156452 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0629 18:22:05.718961  156452 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0629 18:22:05.719557  156452 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.24.2': No such file or directory
	I0629 18:22:05.719590  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 --> /var/lib/minikube/images/kube-apiserver_v1.24.2 (33798144 bytes)
	I0629 18:22:05.730387  156452 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.24.2': No such file or directory
	I0629 18:22:05.730421  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 --> /var/lib/minikube/images/kube-controller-manager_v1.24.2 (31037952 bytes)
	I0629 18:22:05.800215  156452 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I0629 18:22:05.800351  156452 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0629 18:22:05.800448  156452 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0629 18:22:05.800491  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0629 18:22:05.800578  156452 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.24.2': No such file or directory
	I0629 18:22:05.800596  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 --> /var/lib/minikube/images/kube-scheduler_v1.24.2 (15491584 bytes)
	I0629 18:22:05.800637  156452 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0629 18:22:05.800653  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0629 18:22:05.800652  156452 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0629 18:22:05.800710  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0629 18:22:05.800555  156452 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.24.2': No such file or directory
	I0629 18:22:05.800774  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 --> /var/lib/minikube/images/kube-proxy_v1.24.2 (39518208 bytes)
	I0629 18:22:05.828464  156452 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0629 18:22:05.828495  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0629 18:22:05.887131  156452 containerd.go:227] Loading image: /var/lib/minikube/images/pause_3.7
	I0629 18:22:05.887191  156452 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I0629 18:22:06.167193  156452 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I0629 18:22:06.167243  156452 containerd.go:227] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0629 18:22:06.167289  156452 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0629 18:22:06.813106  156452 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0629 18:22:06.813143  156452 containerd.go:227] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.2
	I0629 18:22:06.813188  156452 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.2
	I0629 18:22:07.680132  156452 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 from cache
	I0629 18:22:07.680184  156452 containerd.go:227] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0629 18:22:07.680239  156452 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0629 18:22:08.378902  156452 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I0629 18:22:08.378948  156452 containerd.go:227] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.2
	I0629 18:22:08.379027  156452 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.2
	I0629 18:22:12.463882  156452 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.2: (4.084809751s)
	I0629 18:22:12.463916  156452 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 from cache
	I0629 18:22:12.463951  156452 containerd.go:227] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.2
	I0629 18:22:12.464000  156452 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.2
	I0629 18:22:15.778074  156452 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.2: (3.314035597s)
	I0629 18:22:15.778107  156452 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 from cache
	I0629 18:22:15.778142  156452 containerd.go:227] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.2
	I0629 18:22:15.778192  156452 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.2
	I0629 18:22:16.892492  156452 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.2: (1.114277538s)
	I0629 18:22:16.892525  156452 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 from cache
	I0629 18:22:16.892545  156452 containerd.go:227] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0629 18:22:16.892590  156452 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I0629 18:22:21.063664  156452 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (4.171047415s)
	I0629 18:22:21.063686  156452 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I0629 18:22:21.063710  156452 cache_images.go:123] Successfully loaded all cached images
	I0629 18:22:21.063715  156452 cache_images.go:92] LoadImages completed in 16.911050793s
	I0629 18:22:21.063753  156452 ssh_runner.go:195] Run: sudo crictl info
	I0629 18:22:21.102938  156452 cni.go:95] Creating CNI manager for ""
	I0629 18:22:21.102970  156452 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0629 18:22:21.102987  156452 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 18:22:21.103007  156452 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220629182055-10091 NodeName:kubernetes-upgrade-20220629182055-10091 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 18:22:21.103199  156452 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-20220629182055-10091"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 18:22:21.103326  156452 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20220629182055-10091 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220629182055-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 18:22:21.103391  156452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 18:22:21.112247  156452 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 18:22:21.112310  156452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 18:22:21.119550  156452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (563 bytes)
	I0629 18:22:21.132319  156452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 18:22:21.146982  156452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I0629 18:22:21.160129  156452 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 18:22:21.163945  156452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
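The `/etc/hosts` rewrite above is a filter-then-append pattern: strip any stale line for the name, re-emit the file plus the fresh mapping into a temp file, then copy it back over the original. A minimal runnable sketch, using a hypothetical temporary file in place of the real `/etc/hosts` so no root is needed:

```shell
# Sketch of the hosts-entry rewrite above; a temp file stands in for /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.67.2\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any existing entry for the name, then append the fresh mapping.
# Writing to a scratch file and copying back mirrors minikube's /tmp/h.$$ trick.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.168.67.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep -c 'control-plane.minikube.internal' "$hosts"
```

Because the old entry is filtered out before the new one is appended, re-running the snippet never accumulates duplicate lines.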
	I0629 18:22:21.183934  156452 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629182055-10091 for IP: 192.168.67.2
	I0629 18:22:21.184069  156452 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 18:22:21.184131  156452 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 18:22:21.184222  156452 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629182055-10091/client.key
	I0629 18:22:21.184320  156452 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629182055-10091/apiserver.key.c7fa3a9e
	I0629 18:22:21.184396  156452 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629182055-10091/proxy-client.key
	I0629 18:22:21.184528  156452 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/10091.pem (1338 bytes)
	W0629 18:22:21.184572  156452 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/10091_empty.pem, impossibly tiny 0 bytes
	I0629 18:22:21.184584  156452 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1675 bytes)
	I0629 18:22:21.184613  156452 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1078 bytes)
	I0629 18:22:21.184637  156452 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 18:22:21.184658  156452 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1679 bytes)
	I0629 18:22:21.184724  156452 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/100912.pem (1708 bytes)
	I0629 18:22:21.185395  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629182055-10091/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 18:22:21.244805  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629182055-10091/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 18:22:21.263681  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629182055-10091/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 18:22:21.287451  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629182055-10091/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 18:22:21.309339  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 18:22:21.400289  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0629 18:22:21.427107  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 18:22:21.447287  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0629 18:22:21.466351  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/10091.pem --> /usr/share/ca-certificates/10091.pem (1338 bytes)
	I0629 18:22:21.485238  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/100912.pem --> /usr/share/ca-certificates/100912.pem (1708 bytes)
	I0629 18:22:21.505506  156452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 18:22:21.529554  156452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 18:22:21.552352  156452 ssh_runner.go:195] Run: openssl version
	I0629 18:22:21.558260  156452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100912.pem && ln -fs /usr/share/ca-certificates/100912.pem /etc/ssl/certs/100912.pem"
	I0629 18:22:21.566592  156452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100912.pem
	I0629 18:22:21.570091  156452 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/100912.pem
	I0629 18:22:21.570142  156452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100912.pem
	I0629 18:22:21.575365  156452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100912.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 18:22:21.583020  156452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 18:22:21.590428  156452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 18:22:21.594085  156452 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:53 /usr/share/ca-certificates/minikubeCA.pem
	I0629 18:22:21.594129  156452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 18:22:21.599697  156452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 18:22:21.606665  156452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10091.pem && ln -fs /usr/share/ca-certificates/10091.pem /etc/ssl/certs/10091.pem"
	I0629 18:22:21.614203  156452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10091.pem
	I0629 18:22:21.617543  156452 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/10091.pem
	I0629 18:22:21.617592  156452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10091.pem
	I0629 18:22:21.622578  156452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10091.pem /etc/ssl/certs/51391683.0"
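The `openssl x509 -hash` / `ln -fs <hash>.0` pairs above implement OpenSSL's hashed-certificate-directory lookup: trusted CAs are found by a symlink named after the certificate's subject hash. A self-contained sketch, with a throwaway self-signed cert (hypothetical CN) and a temp dir standing in for `minikubeCA.pem` and `/etc/ssl/certs`:

```shell
# Sketch of the subject-hash symlink step above; a throwaway self-signed
# cert and a temp dir stand in for minikubeCA.pem and /etc/ssl/certs.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=minikubeCA-example' \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# OpenSSL resolves trusted CAs by <subject-hash>.0 inside the certs dir.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

This is why the log's symlink names (`b5213941.0`, `3ec20f2e.0`, `51391683.0`) are opaque hex strings: each is the subject hash of the cert it points at.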
	I0629 18:22:21.630798  156452 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220629182055-10091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220629182055-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:22:21.630913  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0629 18:22:21.630959  156452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0629 18:22:21.654927  156452 cri.go:87] found id: ""
	I0629 18:22:21.655002  156452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 18:22:21.662518  156452 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 18:22:21.662545  156452 kubeadm.go:626] restartCluster start
	I0629 18:22:21.662591  156452 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 18:22:21.669514  156452 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:22:21.669875  156452 kubeconfig.go:116] verify returned: extract IP: "kubernetes-upgrade-20220629182055-10091" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 18:22:21.669973  156452 kubeconfig.go:127] "kubernetes-upgrade-20220629182055-10091" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 18:22:21.670332  156452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk893d9eb214a7622f390991dc9e953bb49b2322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 18:22:21.671133  156452 kapi.go:59] client config for kubernetes-upgrade-20220629182055-10091: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629182055-10091/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629182055-10091/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 18:22:21.671680  156452 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 18:22:21.710006  156452 kubeadm.go:593] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-06-29 18:21:08.020944383 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-06-29 18:22:21.156486999 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.67.2
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.67.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20220629182055-10091
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.24.2
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0629 18:22:21.710025  156452 kubeadm.go:1092] stopping kube-system containers ...
	I0629 18:22:21.710035  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0629 18:22:21.710073  156452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0629 18:22:21.746026  156452 cri.go:87] found id: ""
	I0629 18:22:21.746093  156452 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 18:22:21.757976  156452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 18:22:21.765432  156452 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5759 Jun 29 18:21 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5795 Jun 29 18:21 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5959 Jun 29 18:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5747 Jun 29 18:21 /etc/kubernetes/scheduler.conf
	
	I0629 18:22:21.765494  156452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 18:22:21.772400  156452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 18:22:21.779225  156452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 18:22:21.785979  156452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 18:22:21.793291  156452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 18:22:21.801003  156452 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
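The "needs reconfigure: configs differ" decision above is a plain diff-then-copy: if the rendered `kubeadm.yaml.new` differs from the `kubeadm.yaml` on disk, adopt the new one and rerun the init phases. Sketched with hypothetical stand-in temp files rather than the real `/var/tmp/minikube` paths:

```shell
# Sketch of the configs-differ reconfigure step above; two temp files
# stand in for kubeadm.yaml and kubeadm.yaml.new.
old=$(mktemp); new=$(mktemp)
echo 'kubernetesVersion: v1.16.0' > "$old"
echo 'kubernetesVersion: v1.24.2' > "$new"
# diff exits non-zero when the files differ, which triggers the copy.
if ! diff -u "$old" "$new" >/dev/null; then
  cp "$new" "$old"    # adopt the regenerated config
fi
cat "$old"
```

Relying on `diff`'s exit status (rather than parsing its output) is what lets the log both print the unified diff for debugging and drive the reconfigure branch.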
	I0629 18:22:21.801029  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 18:22:21.855602  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 18:22:23.014236  156452 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.158585795s)
	I0629 18:22:23.014268  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 18:22:23.212390  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 18:22:23.271602  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 18:22:23.321594  156452 api_server.go:51] waiting for apiserver process to appear ...
	I0629 18:22:23.321660  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:23.832152  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:24.331723  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:24.832572  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:25.332000  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:25.832444  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:26.332319  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:26.832371  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:27.332316  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:27.832333  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:28.332131  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:28.831596  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:29.332330  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:29.832126  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:30.331745  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:30.832043  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:31.332448  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:31.832245  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:32.332445  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:32.831680  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:33.332577  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:33.832016  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:34.332372  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:34.831863  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:35.332266  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:35.831857  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:36.332349  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:36.832550  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:37.332094  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:37.831624  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:38.332211  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:38.831560  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:39.332390  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:39.832244  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:40.331928  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:40.831999  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:41.332424  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:41.832460  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:42.331966  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:42.832189  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:43.332418  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:43.832479  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:44.331602  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:44.832060  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:45.331723  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:45.831724  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:46.332519  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:46.832106  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:47.331574  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:47.832008  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:48.331907  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:48.832177  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:49.332191  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:49.832452  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:50.331710  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:50.832221  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:51.331970  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:51.832392  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:52.331786  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:52.831539  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:53.331613  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:53.831926  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:54.332178  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:54.832573  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:55.332287  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:55.832183  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:56.331973  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:56.831793  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:57.332060  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:57.832139  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:58.332440  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:58.831602  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:59.331800  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:22:59.832426  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:00.331616  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:00.831753  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:01.332325  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:01.832065  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:02.332509  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:02.831917  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:03.332512  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:03.831764  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:04.331828  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:04.832509  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:05.332451  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:05.831673  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:06.332413  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:06.831532  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:07.332217  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:07.831753  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:08.332084  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:08.832074  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:09.332547  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:09.832301  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:10.331578  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:10.832472  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:11.332222  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:11.831587  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:12.332576  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:12.832304  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:13.332522  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:13.832122  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:14.332448  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:14.831575  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:15.331933  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:15.832364  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:16.331626  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:16.832409  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:17.332345  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:17.832516  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:18.331596  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:18.831682  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:19.331675  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:19.832277  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:20.331579  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:20.831731  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:21.332389  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:21.831598  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:22.332357  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:22.831999  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:23.332406  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:23:23.332482  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:23:23.356196  156452 cri.go:87] found id: ""
	I0629 18:23:23.356221  156452 logs.go:274] 0 containers: []
	W0629 18:23:23.356227  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:23:23.356234  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:23:23.356306  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:23:23.379905  156452 cri.go:87] found id: ""
	I0629 18:23:23.379929  156452 logs.go:274] 0 containers: []
	W0629 18:23:23.379935  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:23:23.379941  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:23:23.379992  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:23:23.403243  156452 cri.go:87] found id: ""
	I0629 18:23:23.403267  156452 logs.go:274] 0 containers: []
	W0629 18:23:23.403275  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:23:23.403284  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:23:23.403335  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:23:23.425309  156452 cri.go:87] found id: ""
	I0629 18:23:23.425332  156452 logs.go:274] 0 containers: []
	W0629 18:23:23.425338  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:23:23.425344  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:23:23.425397  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:23:23.448696  156452 cri.go:87] found id: ""
	I0629 18:23:23.448725  156452 logs.go:274] 0 containers: []
	W0629 18:23:23.448734  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:23:23.448743  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:23:23.448789  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:23:23.471258  156452 cri.go:87] found id: ""
	I0629 18:23:23.471289  156452 logs.go:274] 0 containers: []
	W0629 18:23:23.471297  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:23:23.471303  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:23:23.471348  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:23:23.493472  156452 cri.go:87] found id: ""
	I0629 18:23:23.493501  156452 logs.go:274] 0 containers: []
	W0629 18:23:23.493509  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:23:23.493520  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:23:23.493573  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:23:23.518352  156452 cri.go:87] found id: ""
	I0629 18:23:23.518379  156452 logs.go:274] 0 containers: []
	W0629 18:23:23.518387  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:23:23.518400  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:23:23.518413  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:23:23.563839  156452 logs.go:138] Found kubelet problem: Jun 29 18:23:23 kubernetes-upgrade-20220629182055-10091 kubelet[2329]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:23:23.611869  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:23:23.611899  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:23:23.626239  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:23:23.626266  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:23:23.674843  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:23:23.674867  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:23:23.674879  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:23:23.711197  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:23:23.711236  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:23:23.737366  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:23:23.737393  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:23:23.737538  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:23:23.737564  156452 out.go:239]   Jun 29 18:23:23 kubernetes-upgrade-20220629182055-10091 kubelet[2329]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:23:23 kubernetes-upgrade-20220629182055-10091 kubelet[2329]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:23:23.737579  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:23:23.737587  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:23:33.739081  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:33.831683  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:23:33.831745  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:23:33.855712  156452 cri.go:87] found id: ""
	I0629 18:23:33.855740  156452 logs.go:274] 0 containers: []
	W0629 18:23:33.855749  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:23:33.855757  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:23:33.855805  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:23:33.889235  156452 cri.go:87] found id: ""
	I0629 18:23:33.889259  156452 logs.go:274] 0 containers: []
	W0629 18:23:33.889267  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:23:33.889275  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:23:33.889328  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:23:33.919334  156452 cri.go:87] found id: ""
	I0629 18:23:33.919359  156452 logs.go:274] 0 containers: []
	W0629 18:23:33.919367  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:23:33.919375  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:23:33.919426  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:23:33.945945  156452 cri.go:87] found id: ""
	I0629 18:23:33.945973  156452 logs.go:274] 0 containers: []
	W0629 18:23:33.945982  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:23:33.945990  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:23:33.946039  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:23:33.967383  156452 cri.go:87] found id: ""
	I0629 18:23:33.967415  156452 logs.go:274] 0 containers: []
	W0629 18:23:33.967423  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:23:33.967431  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:23:33.967500  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:23:33.991720  156452 cri.go:87] found id: ""
	I0629 18:23:33.991749  156452 logs.go:274] 0 containers: []
	W0629 18:23:33.991759  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:23:33.991767  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:23:33.991822  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:23:34.023304  156452 cri.go:87] found id: ""
	I0629 18:23:34.023339  156452 logs.go:274] 0 containers: []
	W0629 18:23:34.023348  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:23:34.023356  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:23:34.023415  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:23:34.049955  156452 cri.go:87] found id: ""
	I0629 18:23:34.049978  156452 logs.go:274] 0 containers: []
	W0629 18:23:34.049984  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:23:34.049993  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:23:34.050004  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:23:34.112653  156452 logs.go:138] Found kubelet problem: Jun 29 18:23:33 kubernetes-upgrade-20220629182055-10091 kubelet[2619]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:23:34.186981  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:23:34.187011  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:23:34.202945  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:23:34.202982  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:23:34.255125  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:23:34.255149  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:23:34.255164  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:23:34.290752  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:23:34.290783  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:23:34.316845  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:23:34.316880  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:23:34.317002  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:23:34.317018  156452 out.go:239]   Jun 29 18:23:33 kubernetes-upgrade-20220629182055-10091 kubelet[2619]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:23:33 kubernetes-upgrade-20220629182055-10091 kubelet[2619]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:23:34.317025  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:23:34.317031  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:23:44.318223  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:44.332267  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:23:44.332322  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:23:44.355846  156452 cri.go:87] found id: ""
	I0629 18:23:44.355871  156452 logs.go:274] 0 containers: []
	W0629 18:23:44.355880  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:23:44.355889  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:23:44.355938  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:23:44.386109  156452 cri.go:87] found id: ""
	I0629 18:23:44.386135  156452 logs.go:274] 0 containers: []
	W0629 18:23:44.386144  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:23:44.386152  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:23:44.386206  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:23:44.412389  156452 cri.go:87] found id: ""
	I0629 18:23:44.412416  156452 logs.go:274] 0 containers: []
	W0629 18:23:44.412424  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:23:44.412431  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:23:44.412481  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:23:44.438815  156452 cri.go:87] found id: ""
	I0629 18:23:44.438846  156452 logs.go:274] 0 containers: []
	W0629 18:23:44.438855  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:23:44.438863  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:23:44.438919  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:23:44.469418  156452 cri.go:87] found id: ""
	I0629 18:23:44.469443  156452 logs.go:274] 0 containers: []
	W0629 18:23:44.469451  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:23:44.469459  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:23:44.469510  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:23:44.498226  156452 cri.go:87] found id: ""
	I0629 18:23:44.498254  156452 logs.go:274] 0 containers: []
	W0629 18:23:44.498267  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:23:44.498275  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:23:44.498324  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:23:44.524522  156452 cri.go:87] found id: ""
	I0629 18:23:44.524554  156452 logs.go:274] 0 containers: []
	W0629 18:23:44.524563  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:23:44.524572  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:23:44.524621  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:23:44.579712  156452 cri.go:87] found id: ""
	I0629 18:23:44.579739  156452 logs.go:274] 0 containers: []
	W0629 18:23:44.579748  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:23:44.579761  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:23:44.579779  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:23:44.637326  156452 logs.go:138] Found kubelet problem: Jun 29 18:23:44 kubernetes-upgrade-20220629182055-10091 kubelet[2908]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:23:44.701559  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:23:44.701606  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:23:44.720789  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:23:44.720825  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:23:44.774034  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:23:44.774054  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:23:44.774067  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:23:44.825989  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:23:44.826025  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:23:44.854914  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:23:44.854937  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:23:44.855045  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:23:44.855058  156452 out.go:239]   Jun 29 18:23:44 kubernetes-upgrade-20220629182055-10091 kubelet[2908]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:23:44 kubernetes-upgrade-20220629182055-10091 kubelet[2908]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:23:44.855065  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:23:44.855077  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:23:54.856021  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:23:55.332449  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:23:55.332521  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:23:55.357834  156452 cri.go:87] found id: ""
	I0629 18:23:55.357862  156452 logs.go:274] 0 containers: []
	W0629 18:23:55.357871  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:23:55.357878  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:23:55.357922  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:23:55.387237  156452 cri.go:87] found id: ""
	I0629 18:23:55.387266  156452 logs.go:274] 0 containers: []
	W0629 18:23:55.387275  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:23:55.387282  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:23:55.387331  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:23:55.443021  156452 cri.go:87] found id: ""
	I0629 18:23:55.443046  156452 logs.go:274] 0 containers: []
	W0629 18:23:55.443052  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:23:55.443062  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:23:55.443111  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:23:55.469847  156452 cri.go:87] found id: ""
	I0629 18:23:55.469877  156452 logs.go:274] 0 containers: []
	W0629 18:23:55.469885  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:23:55.469893  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:23:55.469951  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:23:55.503040  156452 cri.go:87] found id: ""
	I0629 18:23:55.503069  156452 logs.go:274] 0 containers: []
	W0629 18:23:55.503077  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:23:55.503094  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:23:55.503155  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:23:55.546974  156452 cri.go:87] found id: ""
	I0629 18:23:55.547000  156452 logs.go:274] 0 containers: []
	W0629 18:23:55.547007  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:23:55.547015  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:23:55.547064  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:23:55.585946  156452 cri.go:87] found id: ""
	I0629 18:23:55.585970  156452 logs.go:274] 0 containers: []
	W0629 18:23:55.585978  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:23:55.585986  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:23:55.586032  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:23:55.616784  156452 cri.go:87] found id: ""
	I0629 18:23:55.616809  156452 logs.go:274] 0 containers: []
	W0629 18:23:55.616818  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:23:55.616829  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:23:55.616844  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:23:55.679219  156452 logs.go:138] Found kubelet problem: Jun 29 18:23:55 kubernetes-upgrade-20220629182055-10091 kubelet[3269]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:23:55.732421  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:23:55.732461  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:23:55.749008  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:23:55.749044  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:23:55.809462  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:23:55.809486  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:23:55.809498  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:23:55.867395  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:23:55.867457  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:23:55.915217  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:23:55.915248  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:23:55.915390  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:23:55.915411  156452 out.go:239]   Jun 29 18:23:55 kubernetes-upgrade-20220629182055-10091 kubelet[3269]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:23:55 kubernetes-upgrade-20220629182055-10091 kubelet[3269]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:23:55.915418  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:23:55.915430  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:24:05.915608  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:24:06.332432  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:24:06.332522  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:24:06.358564  156452 cri.go:87] found id: ""
	I0629 18:24:06.358586  156452 logs.go:274] 0 containers: []
	W0629 18:24:06.358592  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:24:06.358598  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:24:06.358639  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:24:06.381978  156452 cri.go:87] found id: ""
	I0629 18:24:06.382005  156452 logs.go:274] 0 containers: []
	W0629 18:24:06.382014  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:24:06.382022  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:24:06.382075  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:24:06.407254  156452 cri.go:87] found id: ""
	I0629 18:24:06.407288  156452 logs.go:274] 0 containers: []
	W0629 18:24:06.407298  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:24:06.407306  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:24:06.407359  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:24:06.433875  156452 cri.go:87] found id: ""
	I0629 18:24:06.433900  156452 logs.go:274] 0 containers: []
	W0629 18:24:06.433910  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:24:06.433918  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:24:06.433968  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:24:06.462711  156452 cri.go:87] found id: ""
	I0629 18:24:06.462737  156452 logs.go:274] 0 containers: []
	W0629 18:24:06.462745  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:24:06.462753  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:24:06.462803  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:24:06.491031  156452 cri.go:87] found id: ""
	I0629 18:24:06.491062  156452 logs.go:274] 0 containers: []
	W0629 18:24:06.491072  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:24:06.491081  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:24:06.491134  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:24:06.519555  156452 cri.go:87] found id: ""
	I0629 18:24:06.519584  156452 logs.go:274] 0 containers: []
	W0629 18:24:06.519591  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:24:06.519600  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:24:06.519670  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:24:06.547049  156452 cri.go:87] found id: ""
	I0629 18:24:06.547076  156452 logs.go:274] 0 containers: []
	W0629 18:24:06.547084  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:24:06.547094  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:24:06.547106  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:24:06.609032  156452 logs.go:138] Found kubelet problem: Jun 29 18:24:06 kubernetes-upgrade-20220629182055-10091 kubelet[3494]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:24:06.665295  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:24:06.665325  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:24:06.685037  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:24:06.685074  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:24:06.753430  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:24:06.753462  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:24:06.753478  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:24:06.807248  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:24:06.807290  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:24:06.843969  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:24:06.843999  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:24:06.844132  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:24:06.844148  156452 out.go:239]   Jun 29 18:24:06 kubernetes-upgrade-20220629182055-10091 kubelet[3494]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:24:06 kubernetes-upgrade-20220629182055-10091 kubelet[3494]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:24:06.844156  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:24:06.844163  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:24:16.846009  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:24:17.332361  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:24:17.332423  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:24:17.355721  156452 cri.go:87] found id: ""
	I0629 18:24:17.355748  156452 logs.go:274] 0 containers: []
	W0629 18:24:17.355757  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:24:17.355765  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:24:17.355808  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:24:17.378960  156452 cri.go:87] found id: ""
	I0629 18:24:17.378988  156452 logs.go:274] 0 containers: []
	W0629 18:24:17.378997  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:24:17.379005  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:24:17.379057  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:24:17.402548  156452 cri.go:87] found id: ""
	I0629 18:24:17.402571  156452 logs.go:274] 0 containers: []
	W0629 18:24:17.402577  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:24:17.402586  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:24:17.402639  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:24:17.426353  156452 cri.go:87] found id: ""
	I0629 18:24:17.426386  156452 logs.go:274] 0 containers: []
	W0629 18:24:17.426394  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:24:17.426403  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:24:17.426452  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:24:17.448169  156452 cri.go:87] found id: ""
	I0629 18:24:17.448196  156452 logs.go:274] 0 containers: []
	W0629 18:24:17.448206  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:24:17.448214  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:24:17.448254  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:24:17.471735  156452 cri.go:87] found id: ""
	I0629 18:24:17.471755  156452 logs.go:274] 0 containers: []
	W0629 18:24:17.471760  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:24:17.471767  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:24:17.471819  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:24:17.494638  156452 cri.go:87] found id: ""
	I0629 18:24:17.494660  156452 logs.go:274] 0 containers: []
	W0629 18:24:17.494668  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:24:17.494680  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:24:17.494728  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:24:17.517961  156452 cri.go:87] found id: ""
	I0629 18:24:17.517987  156452 logs.go:274] 0 containers: []
	W0629 18:24:17.517993  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:24:17.518002  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:24:17.518011  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:24:17.552064  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:24:17.552092  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:24:17.578232  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:24:17.578260  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:24:17.629476  156452 logs.go:138] Found kubelet problem: Jun 29 18:24:17 kubernetes-upgrade-20220629182055-10091 kubelet[3794]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:24:17.677400  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:24:17.677428  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:24:17.691591  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:24:17.691615  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:24:17.738730  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:24:17.738755  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:24:17.738767  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:24:17.738884  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:24:17.738895  156452 out.go:239]   Jun 29 18:24:17 kubernetes-upgrade-20220629182055-10091 kubelet[3794]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:24:17 kubernetes-upgrade-20220629182055-10091 kubelet[3794]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:24:17.738900  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:24:17.738905  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:24:27.739459  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:24:27.831910  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:24:27.831984  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:24:27.866058  156452 cri.go:87] found id: ""
	I0629 18:24:27.866083  156452 logs.go:274] 0 containers: []
	W0629 18:24:27.866091  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:24:27.866108  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:24:27.866157  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:24:27.902237  156452 cri.go:87] found id: ""
	I0629 18:24:27.902263  156452 logs.go:274] 0 containers: []
	W0629 18:24:27.902271  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:24:27.902279  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:24:27.902333  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:24:27.931684  156452 cri.go:87] found id: ""
	I0629 18:24:27.931719  156452 logs.go:274] 0 containers: []
	W0629 18:24:27.931729  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:24:27.931738  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:24:27.931795  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:24:27.955988  156452 cri.go:87] found id: ""
	I0629 18:24:27.956014  156452 logs.go:274] 0 containers: []
	W0629 18:24:27.956020  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:24:27.956026  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:24:27.956067  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:24:27.994060  156452 cri.go:87] found id: ""
	I0629 18:24:27.994083  156452 logs.go:274] 0 containers: []
	W0629 18:24:27.994090  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:24:27.994099  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:24:27.994148  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:24:28.026419  156452 cri.go:87] found id: ""
	I0629 18:24:28.026443  156452 logs.go:274] 0 containers: []
	W0629 18:24:28.026451  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:24:28.026459  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:24:28.026513  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:24:28.049435  156452 cri.go:87] found id: ""
	I0629 18:24:28.049462  156452 logs.go:274] 0 containers: []
	W0629 18:24:28.049470  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:24:28.049479  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:24:28.049521  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:24:28.079380  156452 cri.go:87] found id: ""
	I0629 18:24:28.079411  156452 logs.go:274] 0 containers: []
	W0629 18:24:28.079419  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:24:28.079431  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:24:28.079445  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:24:28.099364  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:24:28.099399  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:24:28.159771  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:24:28.159793  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:24:28.159804  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:24:28.222037  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:24:28.222095  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:24:28.253691  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:24:28.253723  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:24:28.311643  156452 logs.go:138] Found kubelet problem: Jun 29 18:24:27 kubernetes-upgrade-20220629182055-10091 kubelet[4088]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:24:28.379209  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:24:28.379239  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:24:28.379352  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:24:28.379368  156452 out.go:239]   Jun 29 18:24:27 kubernetes-upgrade-20220629182055-10091 kubelet[4088]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:24:27 kubernetes-upgrade-20220629182055-10091 kubelet[4088]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:24:28.379373  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:24:28.379382  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:24:38.380506  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:24:38.832232  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:24:38.832286  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:24:38.860507  156452 cri.go:87] found id: ""
	I0629 18:24:38.860532  156452 logs.go:274] 0 containers: []
	W0629 18:24:38.860540  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:24:38.860549  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:24:38.860603  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:24:38.886711  156452 cri.go:87] found id: ""
	I0629 18:24:38.886737  156452 logs.go:274] 0 containers: []
	W0629 18:24:38.886745  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:24:38.886753  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:24:38.886805  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:24:38.909810  156452 cri.go:87] found id: ""
	I0629 18:24:38.909834  156452 logs.go:274] 0 containers: []
	W0629 18:24:38.909847  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:24:38.909855  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:24:38.909912  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:24:38.932838  156452 cri.go:87] found id: ""
	I0629 18:24:38.932892  156452 logs.go:274] 0 containers: []
	W0629 18:24:38.932901  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:24:38.932909  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:24:38.932964  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:24:38.955211  156452 cri.go:87] found id: ""
	I0629 18:24:38.955238  156452 logs.go:274] 0 containers: []
	W0629 18:24:38.955246  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:24:38.955255  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:24:38.955301  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:24:38.976732  156452 cri.go:87] found id: ""
	I0629 18:24:38.976758  156452 logs.go:274] 0 containers: []
	W0629 18:24:38.976772  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:24:38.976781  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:24:38.976838  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:24:38.998924  156452 cri.go:87] found id: ""
	I0629 18:24:38.998948  156452 logs.go:274] 0 containers: []
	W0629 18:24:38.998956  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:24:38.998965  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:24:38.999012  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:24:39.024253  156452 cri.go:87] found id: ""
	I0629 18:24:39.024280  156452 logs.go:274] 0 containers: []
	W0629 18:24:39.024288  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:24:39.024300  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:24:39.024316  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:24:39.082423  156452 logs.go:138] Found kubelet problem: Jun 29 18:24:38 kubernetes-upgrade-20220629182055-10091 kubelet[4378]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:24:39.128167  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:24:39.128208  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:24:39.143212  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:24:39.143246  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:24:39.192671  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:24:39.192696  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:24:39.192709  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:24:39.230107  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:24:39.230147  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:24:39.257850  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:24:39.257875  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:24:39.257971  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:24:39.257982  156452 out.go:239]   Jun 29 18:24:38 kubernetes-upgrade-20220629182055-10091 kubelet[4378]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:24:38 kubernetes-upgrade-20220629182055-10091 kubelet[4378]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:24:39.257989  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:24:39.257994  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:24:49.258262  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:24:49.332065  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:24:49.332126  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:24:49.356924  156452 cri.go:87] found id: ""
	I0629 18:24:49.356949  156452 logs.go:274] 0 containers: []
	W0629 18:24:49.356957  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:24:49.356965  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:24:49.357016  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:24:49.384002  156452 cri.go:87] found id: ""
	I0629 18:24:49.384033  156452 logs.go:274] 0 containers: []
	W0629 18:24:49.384042  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:24:49.384050  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:24:49.384096  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:24:49.407535  156452 cri.go:87] found id: ""
	I0629 18:24:49.407563  156452 logs.go:274] 0 containers: []
	W0629 18:24:49.407572  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:24:49.407579  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:24:49.407633  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:24:49.436578  156452 cri.go:87] found id: ""
	I0629 18:24:49.436623  156452 logs.go:274] 0 containers: []
	W0629 18:24:49.436632  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:24:49.436640  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:24:49.436693  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:24:49.462171  156452 cri.go:87] found id: ""
	I0629 18:24:49.462201  156452 logs.go:274] 0 containers: []
	W0629 18:24:49.462210  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:24:49.462219  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:24:49.462281  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:24:49.488797  156452 cri.go:87] found id: ""
	I0629 18:24:49.488819  156452 logs.go:274] 0 containers: []
	W0629 18:24:49.488825  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:24:49.488831  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:24:49.488908  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:24:49.514920  156452 cri.go:87] found id: ""
	I0629 18:24:49.514941  156452 logs.go:274] 0 containers: []
	W0629 18:24:49.514947  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:24:49.514953  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:24:49.515009  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:24:49.539995  156452 cri.go:87] found id: ""
	I0629 18:24:49.540020  156452 logs.go:274] 0 containers: []
	W0629 18:24:49.540029  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:24:49.540039  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:24:49.540052  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:24:49.596798  156452 logs.go:138] Found kubelet problem: Jun 29 18:24:48 kubernetes-upgrade-20220629182055-10091 kubelet[4674]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:24:49.644010  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:24:49.644041  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:24:49.658234  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:24:49.658260  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:24:49.705604  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:24:49.705627  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:24:49.705637  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:24:49.741549  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:24:49.741578  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:24:49.769061  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:24:49.769083  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:24:49.769192  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:24:49.769209  156452 out.go:239]   Jun 29 18:24:48 kubernetes-upgrade-20220629182055-10091 kubelet[4674]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:24:48 kubernetes-upgrade-20220629182055-10091 kubelet[4674]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:24:49.769219  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:24:49.769233  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:24:59.770299  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:24:59.831852  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:24:59.831918  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:24:59.854485  156452 cri.go:87] found id: ""
	I0629 18:24:59.854510  156452 logs.go:274] 0 containers: []
	W0629 18:24:59.854517  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:24:59.854524  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:24:59.854569  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:24:59.876116  156452 cri.go:87] found id: ""
	I0629 18:24:59.876148  156452 logs.go:274] 0 containers: []
	W0629 18:24:59.876156  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:24:59.876163  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:24:59.876210  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:24:59.898683  156452 cri.go:87] found id: ""
	I0629 18:24:59.898703  156452 logs.go:274] 0 containers: []
	W0629 18:24:59.898709  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:24:59.898715  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:24:59.898763  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:24:59.920139  156452 cri.go:87] found id: ""
	I0629 18:24:59.920171  156452 logs.go:274] 0 containers: []
	W0629 18:24:59.920180  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:24:59.920189  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:24:59.920251  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:24:59.942990  156452 cri.go:87] found id: ""
	I0629 18:24:59.943011  156452 logs.go:274] 0 containers: []
	W0629 18:24:59.943017  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:24:59.943024  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:24:59.943076  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:24:59.965315  156452 cri.go:87] found id: ""
	I0629 18:24:59.965341  156452 logs.go:274] 0 containers: []
	W0629 18:24:59.965348  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:24:59.965354  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:24:59.965396  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:24:59.987533  156452 cri.go:87] found id: ""
	I0629 18:24:59.987560  156452 logs.go:274] 0 containers: []
	W0629 18:24:59.987568  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:24:59.987576  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:24:59.987629  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:25:00.014837  156452 cri.go:87] found id: ""
	I0629 18:25:00.014864  156452 logs.go:274] 0 containers: []
	W0629 18:25:00.014872  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:25:00.014882  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:25:00.014904  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:25:00.079914  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:25:00.079936  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:25:00.079947  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:25:00.116353  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:25:00.116384  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:25:00.141043  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:25:00.141073  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:25:00.185687  156452 logs.go:138] Found kubelet problem: Jun 29 18:25:00 kubernetes-upgrade-20220629182055-10091 kubelet[5088]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:25:00.232191  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:25:00.232221  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:25:00.246520  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:25:00.246544  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:25:00.246652  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:25:00.246666  156452 out.go:239]   Jun 29 18:25:00 kubernetes-upgrade-20220629182055-10091 kubelet[5088]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:25:00 kubernetes-upgrade-20220629182055-10091 kubelet[5088]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:25:00.246674  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:25:00.246683  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:25:10.247681  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:25:10.332481  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:25:10.332541  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:25:10.355848  156452 cri.go:87] found id: ""
	I0629 18:25:10.355876  156452 logs.go:274] 0 containers: []
	W0629 18:25:10.355884  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:25:10.355893  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:25:10.355949  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:25:10.378177  156452 cri.go:87] found id: ""
	I0629 18:25:10.378196  156452 logs.go:274] 0 containers: []
	W0629 18:25:10.378202  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:25:10.378208  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:25:10.378250  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:25:10.401977  156452 cri.go:87] found id: ""
	I0629 18:25:10.402004  156452 logs.go:274] 0 containers: []
	W0629 18:25:10.402011  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:25:10.402017  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:25:10.402057  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:25:10.424284  156452 cri.go:87] found id: ""
	I0629 18:25:10.424309  156452 logs.go:274] 0 containers: []
	W0629 18:25:10.424314  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:25:10.424320  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:25:10.424373  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:25:10.447235  156452 cri.go:87] found id: ""
	I0629 18:25:10.447263  156452 logs.go:274] 0 containers: []
	W0629 18:25:10.447273  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:25:10.447282  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:25:10.447326  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:25:10.469444  156452 cri.go:87] found id: ""
	I0629 18:25:10.469466  156452 logs.go:274] 0 containers: []
	W0629 18:25:10.469472  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:25:10.469478  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:25:10.469530  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:25:10.491511  156452 cri.go:87] found id: ""
	I0629 18:25:10.491532  156452 logs.go:274] 0 containers: []
	W0629 18:25:10.491538  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:25:10.491544  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:25:10.491591  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:25:10.518916  156452 cri.go:87] found id: ""
	I0629 18:25:10.518944  156452 logs.go:274] 0 containers: []
	W0629 18:25:10.518953  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:25:10.518965  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:25:10.518981  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:25:10.534552  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:25:10.534592  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:25:10.598813  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:25:10.598833  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:25:10.598844  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:25:10.660260  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:25:10.660300  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:25:10.688468  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:25:10.688497  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:25:10.745654  156452 logs.go:138] Found kubelet problem: Jun 29 18:25:10 kubernetes-upgrade-20220629182055-10091 kubelet[5390]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:25:10.798105  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:25:10.798133  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:25:10.798239  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:25:10.798252  156452 out.go:239]   Jun 29 18:25:10 kubernetes-upgrade-20220629182055-10091 kubelet[5390]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:25:10 kubernetes-upgrade-20220629182055-10091 kubelet[5390]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:25:10.798256  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:25:10.798262  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:25:20.800006  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:25:20.832244  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:25:20.832303  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:25:20.855312  156452 cri.go:87] found id: ""
	I0629 18:25:20.855333  156452 logs.go:274] 0 containers: []
	W0629 18:25:20.855339  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:25:20.855347  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:25:20.855403  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:25:20.877888  156452 cri.go:87] found id: ""
	I0629 18:25:20.877913  156452 logs.go:274] 0 containers: []
	W0629 18:25:20.877921  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:25:20.877928  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:25:20.877980  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:25:20.903166  156452 cri.go:87] found id: ""
	I0629 18:25:20.903196  156452 logs.go:274] 0 containers: []
	W0629 18:25:20.903205  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:25:20.903212  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:25:20.903269  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:25:20.928164  156452 cri.go:87] found id: ""
	I0629 18:25:20.928195  156452 logs.go:274] 0 containers: []
	W0629 18:25:20.928205  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:25:20.928214  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:25:20.928264  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:25:20.969493  156452 cri.go:87] found id: ""
	I0629 18:25:20.969532  156452 logs.go:274] 0 containers: []
	W0629 18:25:20.969542  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:25:20.969551  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:25:20.969614  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:25:20.994236  156452 cri.go:87] found id: ""
	I0629 18:25:20.994266  156452 logs.go:274] 0 containers: []
	W0629 18:25:20.994273  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:25:20.994281  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:25:20.994331  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:25:21.020010  156452 cri.go:87] found id: ""
	I0629 18:25:21.020039  156452 logs.go:274] 0 containers: []
	W0629 18:25:21.020049  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:25:21.020058  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:25:21.020101  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:25:21.044793  156452 cri.go:87] found id: ""
	I0629 18:25:21.044819  156452 logs.go:274] 0 containers: []
	W0629 18:25:21.044827  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:25:21.044838  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:25:21.044849  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:25:21.119755  156452 logs.go:138] Found kubelet problem: Jun 29 18:25:21 kubernetes-upgrade-20220629182055-10091 kubelet[5667]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:25:21.139279  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:25:21.139339  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:25:21.154038  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:25:21.154066  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:25:21.202030  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:25:21.202052  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:25:21.202064  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:25:21.239664  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:25:21.239703  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:25:21.265889  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:25:21.265916  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:25:21.266025  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:25:21.266040  156452 out.go:239]   Jun 29 18:25:21 kubernetes-upgrade-20220629182055-10091 kubelet[5667]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:25:21 kubernetes-upgrade-20220629182055-10091 kubelet[5667]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:25:21.266049  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:25:21.266062  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:25:31.267055  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:25:31.331752  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:25:31.331820  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:25:31.359194  156452 cri.go:87] found id: ""
	I0629 18:25:31.359222  156452 logs.go:274] 0 containers: []
	W0629 18:25:31.359231  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:25:31.359239  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:25:31.359294  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:25:31.384881  156452 cri.go:87] found id: ""
	I0629 18:25:31.384908  156452 logs.go:274] 0 containers: []
	W0629 18:25:31.384915  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:25:31.384924  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:25:31.384974  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:25:31.409434  156452 cri.go:87] found id: ""
	I0629 18:25:31.409463  156452 logs.go:274] 0 containers: []
	W0629 18:25:31.409472  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:25:31.409481  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:25:31.409533  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:25:31.486424  156452 cri.go:87] found id: ""
	I0629 18:25:31.486454  156452 logs.go:274] 0 containers: []
	W0629 18:25:31.486463  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:25:31.486471  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:25:31.486520  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:25:31.511584  156452 cri.go:87] found id: ""
	I0629 18:25:31.511613  156452 logs.go:274] 0 containers: []
	W0629 18:25:31.511621  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:25:31.511629  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:25:31.511674  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:25:31.538835  156452 cri.go:87] found id: ""
	I0629 18:25:31.538859  156452 logs.go:274] 0 containers: []
	W0629 18:25:31.538867  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:25:31.538876  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:25:31.538928  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:25:31.574681  156452 cri.go:87] found id: ""
	I0629 18:25:31.574700  156452 logs.go:274] 0 containers: []
	W0629 18:25:31.574705  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:25:31.574712  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:25:31.574757  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:25:31.602611  156452 cri.go:87] found id: ""
	I0629 18:25:31.602638  156452 logs.go:274] 0 containers: []
	W0629 18:25:31.602647  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:25:31.602658  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:25:31.602674  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:25:31.649923  156452 logs.go:138] Found kubelet problem: Jun 29 18:25:31 kubernetes-upgrade-20220629182055-10091 kubelet[5942]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:25:31.696505  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:25:31.696534  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:25:31.712290  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:25:31.712320  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:25:31.761581  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:25:31.761610  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:25:31.761623  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:25:31.799785  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:25:31.799812  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:25:31.826447  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:25:31.826479  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:25:31.827110  156452 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0629 18:25:31.827137  156452 out.go:239]   Jun 29 18:25:31 kubernetes-upgrade-20220629182055-10091 kubelet[5942]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jun 29 18:25:31 kubernetes-upgrade-20220629182055-10091 kubelet[5942]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:25:31.827147  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:25:31.827157  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:25:41.828435  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:25:42.332174  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:25:42.332246  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:25:42.356575  156452 cri.go:87] found id: ""
	I0629 18:25:42.356599  156452 logs.go:274] 0 containers: []
	W0629 18:25:42.356606  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:25:42.356614  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:25:42.356677  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:25:42.378853  156452 cri.go:87] found id: ""
	I0629 18:25:42.378872  156452 logs.go:274] 0 containers: []
	W0629 18:25:42.378880  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:25:42.378888  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:25:42.378939  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:25:42.400754  156452 cri.go:87] found id: ""
	I0629 18:25:42.400778  156452 logs.go:274] 0 containers: []
	W0629 18:25:42.400784  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:25:42.400789  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:25:42.400829  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:25:42.423498  156452 cri.go:87] found id: ""
	I0629 18:25:42.423528  156452 logs.go:274] 0 containers: []
	W0629 18:25:42.423536  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:25:42.423542  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:25:42.423593  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:25:42.450289  156452 cri.go:87] found id: ""
	I0629 18:25:42.450311  156452 logs.go:274] 0 containers: []
	W0629 18:25:42.450317  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:25:42.450323  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:25:42.450382  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:25:42.473423  156452 cri.go:87] found id: ""
	I0629 18:25:42.473447  156452 logs.go:274] 0 containers: []
	W0629 18:25:42.473453  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:25:42.473462  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:25:42.473516  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:25:42.494758  156452 cri.go:87] found id: ""
	I0629 18:25:42.494780  156452 logs.go:274] 0 containers: []
	W0629 18:25:42.494788  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:25:42.494797  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:25:42.494845  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:25:42.516434  156452 cri.go:87] found id: ""
	I0629 18:25:42.516458  156452 logs.go:274] 0 containers: []
	W0629 18:25:42.516465  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:25:42.516474  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:25:42.516488  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:25:42.559656  156452 logs.go:138] Found kubelet problem: Jun 29 18:25:42 kubernetes-upgrade-20220629182055-10091 kubelet[6185]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:25:42.604210  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:25:42.604239  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:25:42.618393  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:25:42.618421  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:25:42.665450  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:25:42.665476  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:25:42.665487  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:25:42.701741  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:25:42.701769  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:25:42.726493  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:25:42.726516  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:25:42.726611  156452 out.go:239] X Problems detected in kubelet:
	W0629 18:25:42.726622  156452 out.go:239]   Jun 29 18:25:42 kubernetes-upgrade-20220629182055-10091 kubelet[6185]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:25:42.726626  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:25:42.726631  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:25:52.727460  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:25:52.832453  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:25:52.832519  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:25:52.855351  156452 cri.go:87] found id: ""
	I0629 18:25:52.855374  156452 logs.go:274] 0 containers: []
	W0629 18:25:52.855384  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:25:52.855390  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:25:52.855441  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:25:52.882064  156452 cri.go:87] found id: ""
	I0629 18:25:52.882096  156452 logs.go:274] 0 containers: []
	W0629 18:25:52.882106  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:25:52.882113  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:25:52.882167  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:25:52.907617  156452 cri.go:87] found id: ""
	I0629 18:25:52.907642  156452 logs.go:274] 0 containers: []
	W0629 18:25:52.907652  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:25:52.907659  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:25:52.907708  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:25:52.930354  156452 cri.go:87] found id: ""
	I0629 18:25:52.930374  156452 logs.go:274] 0 containers: []
	W0629 18:25:52.930380  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:25:52.930386  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:25:52.930429  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:25:52.953510  156452 cri.go:87] found id: ""
	I0629 18:25:52.953537  156452 logs.go:274] 0 containers: []
	W0629 18:25:52.953545  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:25:52.953553  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:25:52.953607  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:25:52.977338  156452 cri.go:87] found id: ""
	I0629 18:25:52.977368  156452 logs.go:274] 0 containers: []
	W0629 18:25:52.977376  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:25:52.977386  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:25:52.977437  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:25:53.003396  156452 cri.go:87] found id: ""
	I0629 18:25:53.003417  156452 logs.go:274] 0 containers: []
	W0629 18:25:53.003423  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:25:53.003430  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:25:53.003471  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:25:53.025806  156452 cri.go:87] found id: ""
	I0629 18:25:53.025827  156452 logs.go:274] 0 containers: []
	W0629 18:25:53.025833  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:25:53.025847  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:25:53.025858  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:25:53.039957  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:25:53.039979  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:25:53.087400  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:25:53.087424  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:25:53.087438  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:25:53.124136  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:25:53.124169  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:25:53.150276  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:25:53.150306  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:25:53.194493  156452 logs.go:138] Found kubelet problem: Jun 29 18:25:52 kubernetes-upgrade-20220629182055-10091 kubelet[6480]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:25:53.239133  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:25:53.239159  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:25:53.239273  156452 out.go:239] X Problems detected in kubelet:
	W0629 18:25:53.239287  156452 out.go:239]   Jun 29 18:25:52 kubernetes-upgrade-20220629182055-10091 kubelet[6480]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:25:53.239293  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:25:53.239301  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:26:03.241041  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:26:03.331944  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:26:03.332023  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:26:03.355944  156452 cri.go:87] found id: ""
	I0629 18:26:03.355967  156452 logs.go:274] 0 containers: []
	W0629 18:26:03.355974  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:26:03.355983  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:26:03.356033  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:26:03.377748  156452 cri.go:87] found id: ""
	I0629 18:26:03.377776  156452 logs.go:274] 0 containers: []
	W0629 18:26:03.377782  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:26:03.377788  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:26:03.377830  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:26:03.399032  156452 cri.go:87] found id: ""
	I0629 18:26:03.399054  156452 logs.go:274] 0 containers: []
	W0629 18:26:03.399061  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:26:03.399069  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:26:03.399123  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:26:03.420823  156452 cri.go:87] found id: ""
	I0629 18:26:03.420874  156452 logs.go:274] 0 containers: []
	W0629 18:26:03.420891  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:26:03.420900  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:26:03.420950  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:26:03.443732  156452 cri.go:87] found id: ""
	I0629 18:26:03.443755  156452 logs.go:274] 0 containers: []
	W0629 18:26:03.443764  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:26:03.443770  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:26:03.443817  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:26:03.465497  156452 cri.go:87] found id: ""
	I0629 18:26:03.465519  156452 logs.go:274] 0 containers: []
	W0629 18:26:03.465528  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:26:03.465537  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:26:03.465589  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:26:03.489445  156452 cri.go:87] found id: ""
	I0629 18:26:03.489466  156452 logs.go:274] 0 containers: []
	W0629 18:26:03.489472  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:26:03.489478  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:26:03.489525  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:26:03.511938  156452 cri.go:87] found id: ""
	I0629 18:26:03.511963  156452 logs.go:274] 0 containers: []
	W0629 18:26:03.511970  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:26:03.511981  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:26:03.511995  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:26:03.559966  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:26:03.559992  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:26:03.560009  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:26:03.595029  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:26:03.595056  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:26:03.619728  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:26:03.619759  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:26:03.663099  156452 logs.go:138] Found kubelet problem: Jun 29 18:26:03 kubernetes-upgrade-20220629182055-10091 kubelet[6777]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:26:03.708775  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:26:03.708805  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:26:03.723026  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:26:03.723047  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:26:03.723142  156452 out.go:239] X Problems detected in kubelet:
	W0629 18:26:03.723153  156452 out.go:239]   Jun 29 18:26:03 kubernetes-upgrade-20220629182055-10091 kubelet[6777]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:26:03.723157  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:26:03.723161  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:26:13.723983  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:26:13.831880  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:26:13.831942  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:26:13.854393  156452 cri.go:87] found id: ""
	I0629 18:26:13.854420  156452 logs.go:274] 0 containers: []
	W0629 18:26:13.854426  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:26:13.854433  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:26:13.854480  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:26:13.875805  156452 cri.go:87] found id: ""
	I0629 18:26:13.875826  156452 logs.go:274] 0 containers: []
	W0629 18:26:13.875832  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:26:13.875837  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:26:13.875890  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:26:13.898566  156452 cri.go:87] found id: ""
	I0629 18:26:13.898595  156452 logs.go:274] 0 containers: []
	W0629 18:26:13.898604  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:26:13.898612  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:26:13.898653  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:26:13.920574  156452 cri.go:87] found id: ""
	I0629 18:26:13.920600  156452 logs.go:274] 0 containers: []
	W0629 18:26:13.920611  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:26:13.920620  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:26:13.920670  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:26:13.941652  156452 cri.go:87] found id: ""
	I0629 18:26:13.941677  156452 logs.go:274] 0 containers: []
	W0629 18:26:13.941687  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:26:13.941695  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:26:13.941748  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:26:13.964000  156452 cri.go:87] found id: ""
	I0629 18:26:13.964026  156452 logs.go:274] 0 containers: []
	W0629 18:26:13.964033  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:26:13.964039  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:26:13.964091  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:26:13.985416  156452 cri.go:87] found id: ""
	I0629 18:26:13.985437  156452 logs.go:274] 0 containers: []
	W0629 18:26:13.985442  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:26:13.985448  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:26:13.985489  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:26:14.008711  156452 cri.go:87] found id: ""
	I0629 18:26:14.008739  156452 logs.go:274] 0 containers: []
	W0629 18:26:14.008748  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:26:14.008760  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:26:14.008773  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:26:14.057792  156452 logs.go:138] Found kubelet problem: Jun 29 18:26:13 kubernetes-upgrade-20220629182055-10091 kubelet[7073]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:26:14.103232  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:26:14.103264  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 18:26:14.117568  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:26:14.117594  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:26:14.163837  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:26:14.163862  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:26:14.163872  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:26:14.201560  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:26:14.201590  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:26:14.227800  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:26:14.227826  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0629 18:26:14.227948  156452 out.go:239] X Problems detected in kubelet:
	W0629 18:26:14.227962  156452 out.go:239]   Jun 29 18:26:13 kubernetes-upgrade-20220629182055-10091 kubelet[7073]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:26:14.227970  156452 out.go:309] Setting ErrFile to fd 2...
	I0629 18:26:14.227978  156452 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:26:24.228965  156452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:26:24.237829  156452 kubeadm.go:630] restartCluster took 4m2.575272885s
	W0629 18:26:24.237985  156452 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0629 18:26:24.238022  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0629 18:26:24.905352  156452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 18:26:24.917269  156452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 18:26:24.926488  156452 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 18:26:24.926534  156452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 18:26:24.934336  156452 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 18:26:24.934381  156452 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 18:26:25.204216  156452 out.go:204]   - Generating certificates and keys ...
	I0629 18:26:25.946037  156452 out.go:204]   - Booting up control plane ...
	W0629 18:28:20.959452  156452 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.13.0-1033-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0629 18:26:24.969812    7613 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1033-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.13.0-1033-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0629 18:26:24.969812    7613 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1033-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0629 18:28:20.959513  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0629 18:28:21.618478  156452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 18:28:21.628578  156452 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 18:28:21.628635  156452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 18:28:21.635774  156452 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 18:28:21.635822  156452 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 18:28:21.872400  156452 out.go:204]   - Generating certificates and keys ...
	I0629 18:28:22.517313  156452 out.go:204]   - Booting up control plane ...
	I0629 18:30:17.533443  156452 kubeadm.go:397] StartCluster complete in 7m55.902652472s
	I0629 18:30:17.533481  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:30:17.533520  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:30:17.557148  156452 cri.go:87] found id: ""
	I0629 18:30:17.557171  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.557177  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:30:17.557182  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:30:17.557227  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:30:17.580224  156452 cri.go:87] found id: ""
	I0629 18:30:17.580252  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.580260  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:30:17.580266  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:30:17.580328  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:30:17.603094  156452 cri.go:87] found id: ""
	I0629 18:30:17.603123  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.603132  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:30:17.603138  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:30:17.603179  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:30:17.625492  156452 cri.go:87] found id: ""
	I0629 18:30:17.625512  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.625519  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:30:17.625524  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:30:17.625563  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:30:17.647042  156452 cri.go:87] found id: ""
	I0629 18:30:17.647063  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.647068  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:30:17.647074  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:30:17.647113  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:30:17.668741  156452 cri.go:87] found id: ""
	I0629 18:30:17.668763  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.668770  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:30:17.668775  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:30:17.668822  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:30:17.694345  156452 cri.go:87] found id: ""
	I0629 18:30:17.694371  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.694383  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:30:17.694391  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:30:17.694447  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:30:17.718181  156452 cri.go:87] found id: ""
	I0629 18:30:17.718202  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.718208  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:30:17.718218  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:30:17.718235  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:30:17.765334  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:30:17.765355  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:30:17.765370  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:30:17.820076  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:30:17.820117  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:30:17.851247  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:30:17.851280  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:30:17.918926  156452 logs.go:138] Found kubelet problem: Jun 29 18:30:17 kubernetes-upgrade-20220629182055-10091 kubelet[11610]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:30:17.965386  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:30:17.965422  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0629 18:30:17.982581  156452 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.13.0-1033-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0629 18:28:21.666839    9720 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1033-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0629 18:30:17.982627  156452 out.go:239] * 
	W0629 18:30:17.982853  156452 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.13.0-1033-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0629 18:28:21.666839    9720 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1033-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 18:30:17.982887  156452 out.go:239] * 
	W0629 18:30:17.983620  156452 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 18:30:17.986243  156452 out.go:177] X Problems detected in kubelet:
	I0629 18:30:17.987871  156452 out.go:177]   Jun 29 18:30:17 kubernetes-upgrade-20220629182055-10091 kubelet[11610]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:30:17.991405  156452 out.go:177] 
	W0629 18:30:17.993287  156452 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.13.0-1033-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0629 18:28:21.666839    9720 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1033-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 18:30:17.993404  156452 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0629 18:30:17.993467  156452 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0629 18:30:17.995197  156452 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220629182055-10091 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220629182055-10091 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220629182055-10091 version --output=json: exit status 1 (70.502509ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "24",
	    "gitVersion": "v1.24.2",
	    "gitCommit": "f66044f4361b9f1f96f0053dd46cb7dce5e990a8",
	    "gitTreeState": "clean",
	    "buildDate": "2022-06-15T14:22:29Z",
	    "goVersion": "go1.18.3",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  },
	  "kustomizeVersion": "v4.5.4"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.67.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-06-29 18:30:18.224826943 +0000 UTC m=+2240.921822983
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220629182055-10091
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220629182055-10091:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5337db229f3ddaf3b0c4665a4040c96ce24954abe282c6d5728f91a55570953c",
	        "Created": "2022-06-29T18:21:03.624174314Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 157229,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:21:43.25362957Z",
	            "FinishedAt": "2022-06-29T18:21:41.444430392Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/5337db229f3ddaf3b0c4665a4040c96ce24954abe282c6d5728f91a55570953c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5337db229f3ddaf3b0c4665a4040c96ce24954abe282c6d5728f91a55570953c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5337db229f3ddaf3b0c4665a4040c96ce24954abe282c6d5728f91a55570953c/hosts",
	        "LogPath": "/var/lib/docker/containers/5337db229f3ddaf3b0c4665a4040c96ce24954abe282c6d5728f91a55570953c/5337db229f3ddaf3b0c4665a4040c96ce24954abe282c6d5728f91a55570953c-json.log",
	        "Name": "/kubernetes-upgrade-20220629182055-10091",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220629182055-10091:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220629182055-10091",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f809ee073bea80888340146356299363c0179f7b8bf47e4e9423a7fcb7a5e5b-init/diff:/var/lib/docker/overlay2/d3385aa9c39f527be309c21428823d7d243d9156f6da0fec32a2c45144ff4ce2/diff:/var/lib/docker/overlay2/2f3e290649c2b746907e4f8a0276fa5ee413e8e24f45f826ec445b93da848cbd/diff:/var/lib/docker/overlay2/41ca8b5fdfa604f8656247a7e3f9125ffef80bb4e6c20e193244f8583cf080c9/diff:/var/lib/docker/overlay2/43485de9a68a2bef5ae3ebb719a1c5c4171e784bdc77dcc4519efd8846e8991f/diff:/var/lib/docker/overlay2/ebc5ee8fbedda16385fc4fae5e53748bde6942826695bd3a6485d408452cd320/diff:/var/lib/docker/overlay2/39479dde2499df18b0026e0a617e624dc981b87aefe547ec21af4af63d51ed56/diff:/var/lib/docker/overlay2/0d596598e5068140baa7f8db3c3f7bd8c230bb98505ab34919d6f851b3c2604a/diff:/var/lib/docker/overlay2/237d9f094f7865337f3b6f2b4e1b0bb3bd586d69b2ae9b1ccfb897c740879026/diff:/var/lib/docker/overlay2/99414d4e487c9d623ab87fbe3184dae45c1ba6de459e3836f42d8b42977742e3/diff:/var/lib/docker/overlay2/f54763832a72995eba9a1faec0fae8b4ce9372a3532d4bf8e7574d9e387d13d1/diff:/var/lib/docker/overlay2/bf23e0e2a16df79cc5414d1e93f3daec6ea744510405c89275eb1f790246a657/diff:/var/lib/docker/overlay2/5c8f2925a4af943a42b2292f3bea31ad3608dd653cd0ca835de350dd12a853a2/diff:/var/lib/docker/overlay2/ad77ffaf508aa581ab0fbfa65bb622d36cd47595360c69ea418349f3255ec1d3/diff:/var/lib/docker/overlay2/8a31844861fb59f5c531538e410a6b9499cee62b618333bc0a01d44e90d9e285/diff:/var/lib/docker/overlay2/cfe46cb30abe53101d69cfda2c706cf60371c66f4cbcd8ec61b9c967a11757cc/diff:/var/lib/docker/overlay2/f217e5ddd8fe277d3d36937bee06089cd5e079250600e24c07181173232fa52e/diff:/var/lib/docker/overlay2/b3f423c9dccbd61682d8d2ebf463ecd3d036c63b9c45c68de51687772d552d69/diff:/var/lib/docker/overlay2/0c713216a0f476551a5ade44c4502d4a8824b32dc6d5bcf1c683e54e55fa7af2/diff:/var/lib/docker/overlay2/8e8b6d81bcb61412ad338380fe8eedaff8682cdffc6348aa2ab6ff5a0dbd7613/diff:/var/lib/docker/overlay2/4857370de21532b71afcb6f39449dff8fa35358a3ff97c8ee25e16a14cff1377/diff:/var/lib/docker/overlay2/8d2394373c7cdd8d498f76ef6459ebf6babc023854a087d2a8c7788be8f15a67/diff:/var/lib/docker/overlay2/671d1ef4d0b0e5510abdb420c926acb9c22a90973b7bc88473f83a154c539614/diff:/var/lib/docker/overlay2/dc65089c4b3c41c8b2260bea8373486cbd8db6d8e9077b9291db318144009682/diff:/var/lib/docker/overlay2/6e23c435dd8e0fec389f260da747f68e661422beaa8270392b0a775750298cba/diff:/var/lib/docker/overlay2/fb711cc431502c9f34646bc21cbc192a19d74103b13148b31be18ff3acb0c651/diff:/var/lib/docker/overlay2/7bd64135baa2b733a8efc6357029d1b68a9e6eae5ee505755c150cc8d18b2267/diff:/var/lib/docker/overlay2/3128dfeacabc43fc1ce09744ab67cc5108916066e32440f585f069052d0b3705/diff:/var/lib/docker/overlay2/76ce0122c8d57aa32d3df6fd309f805993fe917e579fb75cb7f5a83388c59c62/diff:/var/lib/docker/overlay2/bf32cc04520f5b4194004e2d31f835289a167bfe8e107c1184c419808ab95ec6/diff:/var/lib/docker/overlay2/a2ccc2d9df1e14f92aebfb070a4336c436224963b4ac2021b4f712d40197a768/diff:/var/lib/docker/overlay2/b2bfe23c6ac0445ba734c805dae3ea2c7c8c67447c25a53037ac10f6efc46895/diff:/var/lib/docker/overlay2/47395273a738a8ef84fd0fcc9d4c29eb402d0d663cd830951ed3fc4c8e8a4465/diff:/var/lib/docker/overlay2/599c04da8f9a04fc6668b1418b3c1d7a736154b052635dca60cc065957831a36/diff:/var/lib/docker/overlay2/ed39aac62a9e47ac1ce0ff5d7145185d2a1afbe2f85ac6c37cf297160f48343f/diff:/var/lib/docker/overlay2/ddacbb067a39fdf9c27a594166457bbab0ed581308d988257b9d4dfaa6e57999/diff:/var/lib/docker/overlay2/c84d36f4cbe73dec9935a33b5b87ad855045231ada9e22ac10fa3dd43a0f1c0d/diff:/var/lib/docker/overlay2/c13fc639786f03ef458e7e9058bc31d8ad566b9e40c620d29b317e353e656c1d/diff:/var/lib/docker/overlay2/435f96dbcd095a80eed563d7c8a11657313cab4c5ea2d2aac03e65eeb8234019/diff:/var/lib/docker/overlay2/f88936359dc664c06e01d163c0a5a7e3ca8f842c9d2fc3b4f3874f73ef1bbab4/diff:/var/lib/docker/overlay2/81f7f543275073933bc58a5219c16620a9de01207d9278e73a2c14011c4fd4eb/diff:/var/lib/docker/overlay2/e919eb7a7febdbe7551dcdd4f619bf6033d7a9f2bf4689e021b7036b7e641dcb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f809ee073bea80888340146356299363c0179f7b8bf47e4e9423a7fcb7a5e5b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f809ee073bea80888340146356299363c0179f7b8bf47e4e9423a7fcb7a5e5b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f809ee073bea80888340146356299363c0179f7b8bf47e4e9423a7fcb7a5e5b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220629182055-10091",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220629182055-10091/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220629182055-10091",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220629182055-10091",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220629182055-10091",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b248c828754e4feed0839487c70b9e7482af662756ab2b3dca8dc7bab79af69",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49337"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49336"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49333"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49335"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49334"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0b248c828754",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220629182055-10091": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5337db229f3d",
	                        "kubernetes-upgrade-20220629182055-10091"
	                    ],
	                    "NetworkID": "54ba0cd44c97ebe1c29405db05eb6f6147a853f0133c988c4d514cd0d30e7fcd",
	                    "EndpointID": "17226cb4eb68dfa7ec8dd3b20a577978aead5217e7ec903a01ca0be312ef79c7",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220629182055-10091 -n kubernetes-upgrade-20220629182055-10091
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220629182055-10091 -n kubernetes-upgrade-20220629182055-10091: exit status 2 (414.014866ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220629182055-10091 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:23 UTC | 29 Jun 22 18:23 UTC |
	|         | force-systemd-flag-20220629182310-10091           |          |         |         |                     |                     |
	|         | --memory=2048 --force-systemd                     |          |         |         |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker            |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:23 UTC | 29 Jun 22 18:23 UTC |
	|         | running-upgrade-20220629182131-10091              |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:23 UTC | 29 Jun 22 18:23 UTC |
	|         | cert-options-20220629182317-10091                 |          |         |         |                     |                     |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                         |          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                     |          |         |         |                     |                     |
	|         | --apiserver-names=localhost                       |          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                  |          |         |         |                     |                     |
	|         | --apiserver-port=8555                             |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| ssh     | force-systemd-flag-20220629182310-10091           | minikube | jenkins | v1.26.0 | 29 Jun 22 18:23 UTC | 29 Jun 22 18:23 UTC |
	|         | ssh cat /etc/containerd/config.toml               |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:23 UTC | 29 Jun 22 18:23 UTC |
	|         | force-systemd-flag-20220629182310-10091           |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:23 UTC | 29 Jun 22 18:25 UTC |
	|         | old-k8s-version-20220629182346-10091              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |          |         |         |                     |                     |
	| ssh     | cert-options-20220629182317-10091                 | minikube | jenkins | v1.26.0 | 29 Jun 22 18:23 UTC | 29 Jun 22 18:23 UTC |
	|         | ssh openssl x509 -text -noout -in                 |          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt             |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:23 UTC | 29 Jun 22 18:23 UTC |
	|         | cert-options-20220629182317-10091                 |          |         |         |                     |                     |
	|         | -- sudo cat                                       |          |         |         |                     |                     |
	|         | /etc/kubernetes/admin.conf                        |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:23 UTC | 29 Jun 22 18:24 UTC |
	|         | cert-options-20220629182317-10091                 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:24 UTC | 29 Jun 22 18:25 UTC |
	|         | no-preload-20220629182400-10091                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 18:25 UTC | 29 Jun 22 18:25 UTC |
	|         | no-preload-20220629182400-10091                   |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:25 UTC | 29 Jun 22 18:25 UTC |
	|         | no-preload-20220629182400-10091                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 18:25 UTC | 29 Jun 22 18:25 UTC |
	|         | no-preload-20220629182400-10091                   |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:25 UTC |                     |
	|         | no-preload-20220629182400-10091                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 18:25 UTC | 29 Jun 22 18:25 UTC |
	|         | old-k8s-version-20220629182346-10091              |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:25 UTC | 29 Jun 22 18:26 UTC |
	|         | old-k8s-version-20220629182346-10091              |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 18:26 UTC | 29 Jun 22 18:26 UTC |
	|         | old-k8s-version-20220629182346-10091              |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:26 UTC |                     |
	|         | old-k8s-version-20220629182346-10091              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:26 UTC | 29 Jun 22 18:26 UTC |
	|         | cert-expiration-20220629182257-10091              |          |         |         |                     |                     |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --cert-expiration=8760h                           |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --container-runtime=containerd                    |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:26 UTC | 29 Jun 22 18:26 UTC |
	|         | cert-expiration-20220629182257-10091              |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:26 UTC | 29 Jun 22 18:27 UTC |
	|         | default-k8s-different-port-20220629182651-10091   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 18:27 UTC | 29 Jun 22 18:27 UTC |
	|         | default-k8s-different-port-20220629182651-10091   |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:27 UTC | 29 Jun 22 18:28 UTC |
	|         | default-k8s-different-port-20220629182651-10091   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 18:28 UTC | 29 Jun 22 18:28 UTC |
	|         | default-k8s-different-port-20220629182651-10091   |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 18:28 UTC |                     |
	|         | default-k8s-different-port-20220629182651-10091   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |          |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 18:28:06
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 18:28:06.565891  210742 out.go:296] Setting OutFile to fd 1 ...
	I0629 18:28:06.566009  210742 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:28:06.566022  210742 out.go:309] Setting ErrFile to fd 2...
	I0629 18:28:06.566029  210742 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:28:06.566483  210742 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 18:28:06.566745  210742 out.go:303] Setting JSON to false
	I0629 18:28:06.568296  210742 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4237,"bootTime":1656523050,"procs":810,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1033-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0629 18:28:06.568364  210742 start.go:125] virtualization: kvm guest
	I0629 18:28:06.571363  210742 out.go:177] * [default-k8s-different-port-20220629182651-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0629 18:28:06.572892  210742 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 18:28:06.572908  210742 notify.go:193] Checking for updates...
	I0629 18:28:06.574418  210742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 18:28:06.576285  210742 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 18:28:06.577728  210742 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 18:28:06.579085  210742 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0629 18:28:02.419215  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:04.918870  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:06.580747  210742 config.go:178] Loaded profile config "default-k8s-different-port-20220629182651-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:28:06.581136  210742 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 18:28:06.621023  210742 docker.go:137] docker version: linux-20.10.17
	I0629 18:28:06.621168  210742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:28:06.721467  210742 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-06-29 18:28:06.648938709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1033-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:28:06.721573  210742 docker.go:254] overlay module found
	I0629 18:28:06.723715  210742 out.go:177] * Using the docker driver based on existing profile
	I0629 18:28:06.725067  210742 start.go:284] selected driver: docker
	I0629 18:28:06.725082  210742 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220629182651-10091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629182651-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:28:06.725179  210742 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 18:28:06.726073  210742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:28:06.829064  210742 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-06-29 18:28:06.754089455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1033-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:28:06.829337  210742 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 18:28:06.829358  210742 cni.go:95] Creating CNI manager for ""
	I0629 18:28:06.829367  210742 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0629 18:28:06.829377  210742 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220629182651-10091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629182651-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:28:06.832022  210742 out.go:177] * Starting control plane node default-k8s-different-port-20220629182651-10091 in cluster default-k8s-different-port-20220629182651-10091
	I0629 18:28:06.833242  210742 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0629 18:28:06.834674  210742 out.go:177] * Pulling base image ...
	I0629 18:28:06.835978  210742 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0629 18:28:06.836002  210742 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 18:28:06.836013  210742 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0629 18:28:06.836022  210742 cache.go:57] Caching tarball of preloaded images
	I0629 18:28:06.836198  210742 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 18:28:06.836220  210742 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0629 18:28:06.836350  210742 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/config.json ...
	I0629 18:28:06.868678  210742 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 18:28:06.868702  210742 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 18:28:06.868710  210742 cache.go:208] Successfully downloaded all kic artifacts
	I0629 18:28:06.868741  210742 start.go:352] acquiring machines lock for default-k8s-different-port-20220629182651-10091: {Name:mk4e1ad3abed7e8b4e09df91b1c845f1c6c4a994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 18:28:06.868817  210742 start.go:356] acquired machines lock for "default-k8s-different-port-20220629182651-10091" in 60.165µs
	I0629 18:28:06.868834  210742 start.go:94] Skipping create...Using existing machine configuration
	I0629 18:28:06.868839  210742 fix.go:55] fixHost starting: 
	I0629 18:28:06.869102  210742 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629182651-10091 --format={{.State.Status}}
	I0629 18:28:06.900052  210742 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220629182651-10091: state=Stopped err=<nil>
	W0629 18:28:06.900096  210742 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 18:28:06.902257  210742 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220629182651-10091" ...
	I0629 18:28:06.275354  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:08.276139  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:10.775605  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:06.903668  210742 cli_runner.go:164] Run: docker start default-k8s-different-port-20220629182651-10091
	I0629 18:28:07.251493  210742 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629182651-10091 --format={{.State.Status}}
	I0629 18:28:07.290721  210742 kic.go:416] container "default-k8s-different-port-20220629182651-10091" state is running.
	I0629 18:28:07.291188  210742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220629182651-10091
	I0629 18:28:07.328159  210742 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/config.json ...
	I0629 18:28:07.328373  210742 machine.go:88] provisioning docker machine ...
	I0629 18:28:07.328401  210742 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220629182651-10091"
	I0629 18:28:07.328448  210742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629182651-10091
	I0629 18:28:07.361607  210742 main.go:134] libmachine: Using SSH client type: native
	I0629 18:28:07.361786  210742 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49392 <nil> <nil>}
	I0629 18:28:07.361811  210742 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220629182651-10091 && echo "default-k8s-different-port-20220629182651-10091" | sudo tee /etc/hostname
	I0629 18:28:07.362480  210742 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56788->127.0.0.1:49392: read: connection reset by peer
	I0629 18:28:10.484851  210742 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220629182651-10091
	
	I0629 18:28:10.484939  210742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629182651-10091
	I0629 18:28:10.515899  210742 main.go:134] libmachine: Using SSH client type: native
	I0629 18:28:10.516067  210742 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49392 <nil> <nil>}
	I0629 18:28:10.516094  210742 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220629182651-10091' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220629182651-10091/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220629182651-10091' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 18:28:10.628481  210742 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 18:28:10.628509  210742 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 18:28:10.628535  210742 ubuntu.go:177] setting up certificates
	I0629 18:28:10.628545  210742 provision.go:83] configureAuth start
	I0629 18:28:10.628605  210742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220629182651-10091
	I0629 18:28:10.659889  210742 provision.go:138] copyHostCerts
	I0629 18:28:10.659949  210742 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 18:28:10.659963  210742 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 18:28:10.660024  210742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1078 bytes)
	I0629 18:28:10.660112  210742 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 18:28:10.660124  210742 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 18:28:10.660148  210742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 18:28:10.660213  210742 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 18:28:10.660222  210742 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 18:28:10.660242  210742 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1679 bytes)
	I0629 18:28:10.660297  210742 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220629182651-10091 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220629182651-10091]
	I0629 18:28:11.024692  210742 provision.go:172] copyRemoteCerts
	I0629 18:28:11.024752  210742 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 18:28:11.024790  210742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629182651-10091
	I0629 18:28:11.059070  210742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629182651-10091/id_rsa Username:docker}
	I0629 18:28:11.143840  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0629 18:28:11.160498  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0629 18:28:11.176620  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0629 18:28:11.192531  210742 provision.go:86] duration metric: configureAuth took 563.968434ms
	I0629 18:28:11.192560  210742 ubuntu.go:193] setting minikube options for container-runtime
	I0629 18:28:11.192723  210742 config.go:178] Loaded profile config "default-k8s-different-port-20220629182651-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:28:11.192736  210742 machine.go:91] provisioned docker machine in 3.86434874s
	I0629 18:28:11.192743  210742 start.go:306] post-start starting for "default-k8s-different-port-20220629182651-10091" (driver="docker")
	I0629 18:28:11.192748  210742 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 18:28:11.192782  210742 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 18:28:11.192815  210742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629182651-10091
	I0629 18:28:11.223911  210742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629182651-10091/id_rsa Username:docker}
	I0629 18:28:11.312065  210742 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 18:28:11.314655  210742 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 18:28:11.314675  210742 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 18:28:11.314687  210742 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 18:28:11.314694  210742 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 18:28:11.314710  210742 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 18:28:11.314762  210742 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 18:28:11.314848  210742 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/100912.pem -> 100912.pem in /etc/ssl/certs
	I0629 18:28:11.314941  210742 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 18:28:11.321074  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/100912.pem --> /etc/ssl/certs/100912.pem (1708 bytes)
	I0629 18:28:11.337228  210742 start.go:309] post-start completed in 144.475124ms
	I0629 18:28:11.337281  210742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 18:28:11.337312  210742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629182651-10091
	I0629 18:28:11.368701  210742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629182651-10091/id_rsa Username:docker}
	I0629 18:28:11.448902  210742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 18:28:11.452537  210742 fix.go:57] fixHost completed within 4.583693562s
	I0629 18:28:11.452561  210742 start.go:81] releasing machines lock for "default-k8s-different-port-20220629182651-10091", held for 4.583731372s
	I0629 18:28:11.452647  210742 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220629182651-10091
	I0629 18:28:11.484705  210742 ssh_runner.go:195] Run: systemctl --version
	I0629 18:28:11.484749  210742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629182651-10091
	I0629 18:28:11.484792  210742 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 18:28:11.484877  210742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629182651-10091
	I0629 18:28:11.516788  210742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629182651-10091/id_rsa Username:docker}
	I0629 18:28:11.516849  210742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629182651-10091/id_rsa Username:docker}
	I0629 18:28:06.919747  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:09.418738  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:11.418927  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:12.775778  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:14.776210  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:11.620067  210742 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0629 18:28:11.631094  210742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 18:28:11.639773  210742 docker.go:179] disabling docker service ...
	I0629 18:28:11.639819  210742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0629 18:28:11.648586  210742 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0629 18:28:11.656899  210742 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0629 18:28:11.731315  210742 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0629 18:28:11.815560  210742 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0629 18:28:11.824453  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 18:28:11.836652  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0629 18:28:11.843820  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0629 18:28:11.850857  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0629 18:28:11.858330  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0629 18:28:11.865784  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0629 18:28:11.872998  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0629 18:28:11.885010  210742 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0629 18:28:11.890874  210742 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0629 18:28:11.897295  210742 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 18:28:11.975283  210742 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0629 18:28:12.042608  210742 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0629 18:28:12.042671  210742 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0629 18:28:12.046055  210742 start.go:468] Will wait 60s for crictl version
	I0629 18:28:12.046108  210742 ssh_runner.go:195] Run: sudo crictl version
	I0629 18:28:12.071358  210742 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-29T18:28:12Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0629 18:28:13.918661  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:15.918748  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:17.275516  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:19.275592  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:17.918827  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:20.418710  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	W0629 18:28:20.959452  156452 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.13.0-1033-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0629 18:26:24.969812    7613 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1033-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0629 18:28:20.959513  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0629 18:28:21.618478  156452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 18:28:21.628578  156452 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 18:28:21.628635  156452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 18:28:21.635774  156452 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 18:28:21.635822  156452 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 18:28:21.872400  156452 out.go:204]   - Generating certificates and keys ...
	I0629 18:28:23.119570  210742 ssh_runner.go:195] Run: sudo crictl version
	I0629 18:28:23.143154  210742 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0629 18:28:23.143217  210742 ssh_runner.go:195] Run: containerd --version
	I0629 18:28:23.169577  210742 ssh_runner.go:195] Run: containerd --version
	I0629 18:28:23.198282  210742 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0629 18:28:21.775968  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:24.274829  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:23.199801  210742 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220629182651-10091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 18:28:23.235446  210742 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0629 18:28:23.238651  210742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 18:28:23.247631  210742 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0629 18:28:23.247680  210742 ssh_runner.go:195] Run: sudo crictl images --output json
	I0629 18:28:23.272799  210742 containerd.go:547] all images are preloaded for containerd runtime.
	I0629 18:28:23.272824  210742 containerd.go:461] Images already preloaded, skipping extraction
	I0629 18:28:23.272901  210742 ssh_runner.go:195] Run: sudo crictl images --output json
	I0629 18:28:23.297786  210742 containerd.go:547] all images are preloaded for containerd runtime.
	I0629 18:28:23.297811  210742 cache_images.go:84] Images are preloaded, skipping loading
	I0629 18:28:23.297859  210742 ssh_runner.go:195] Run: sudo crictl info
	I0629 18:28:23.320502  210742 cni.go:95] Creating CNI manager for ""
	I0629 18:28:23.320529  210742 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0629 18:28:23.320542  210742 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 18:28:23.320559  210742 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220629182651-10091 NodeName:default-k8s-different-port-20220629182651-10091 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 18:28:23.320732  210742 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220629182651-10091"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 18:28:23.320849  210742 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220629182651-10091 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629182651-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0629 18:28:23.320928  210742 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 18:28:23.327699  210742 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 18:28:23.327748  210742 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 18:28:23.334037  210742 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0629 18:28:23.345798  210742 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 18:28:23.357580  210742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0629 18:28:23.369345  210742 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0629 18:28:23.372114  210742 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 18:28:23.380675  210742 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091 for IP: 192.168.76.2
	I0629 18:28:23.380767  210742 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 18:28:23.380806  210742 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 18:28:23.380932  210742 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.key
	I0629 18:28:23.380984  210742 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/apiserver.key.31bdca25
	I0629 18:28:23.381031  210742 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/proxy-client.key
	I0629 18:28:23.381117  210742 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/10091.pem (1338 bytes)
	W0629 18:28:23.381144  210742 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/10091_empty.pem, impossibly tiny 0 bytes
	I0629 18:28:23.381161  210742 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1675 bytes)
	I0629 18:28:23.381194  210742 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1078 bytes)
	I0629 18:28:23.381219  210742 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 18:28:23.381247  210742 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1679 bytes)
	I0629 18:28:23.381283  210742 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/100912.pem (1708 bytes)
	I0629 18:28:23.381865  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 18:28:23.398215  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 18:28:23.414038  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 18:28:23.432423  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0629 18:28:23.448492  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 18:28:23.464067  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0629 18:28:23.479818  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 18:28:23.495793  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0629 18:28:23.511750  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/10091.pem --> /usr/share/ca-certificates/10091.pem (1338 bytes)
	I0629 18:28:23.527632  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/100912.pem --> /usr/share/ca-certificates/100912.pem (1708 bytes)
	I0629 18:28:23.543646  210742 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 18:28:23.559105  210742 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 18:28:23.570857  210742 ssh_runner.go:195] Run: openssl version
	I0629 18:28:23.576244  210742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 18:28:23.582757  210742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 18:28:23.585619  210742 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:53 /usr/share/ca-certificates/minikubeCA.pem
	I0629 18:28:23.585651  210742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 18:28:23.590031  210742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 18:28:23.596183  210742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10091.pem && ln -fs /usr/share/ca-certificates/10091.pem /etc/ssl/certs/10091.pem"
	I0629 18:28:23.602782  210742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10091.pem
	I0629 18:28:23.605576  210742 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/10091.pem
	I0629 18:28:23.605614  210742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10091.pem
	I0629 18:28:23.610016  210742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10091.pem /etc/ssl/certs/51391683.0"
	I0629 18:28:23.616285  210742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100912.pem && ln -fs /usr/share/ca-certificates/100912.pem /etc/ssl/certs/100912.pem"
	I0629 18:28:23.622880  210742 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100912.pem
	I0629 18:28:23.625733  210742 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/100912.pem
	I0629 18:28:23.625766  210742 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100912.pem
	I0629 18:28:23.630410  210742 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100912.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 18:28:23.637037  210742 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220629182651-10091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629182651-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:28:23.637133  210742 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0629 18:28:23.637175  210742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0629 18:28:23.659605  210742 cri.go:87] found id: "0b5e72c2a77a272ce656dfe1c4b30b1df25536f8da6f720efc855dd316abf7eb"
	I0629 18:28:23.659622  210742 cri.go:87] found id: "9cbbae50ce14d0fa0e963b083a868bec333a4f6e1ac1ced59b8d9e48472c0d0a"
	I0629 18:28:23.659629  210742 cri.go:87] found id: "9a5be7953708436f4de1a120dadda358016eee6cd6c6cc81a8f2f6ed8939b385"
	I0629 18:28:23.659637  210742 cri.go:87] found id: "d6e488282255aa5ad3cdd1221874c734409869c0c1beaabf92068ab0b894c90f"
	I0629 18:28:23.659646  210742 cri.go:87] found id: "692aff0d6c84793075ef5a24fcbbffca9129217eaba313c176dc32f4cf8bdd8c"
	I0629 18:28:23.659659  210742 cri.go:87] found id: "bc82c69b829afa2467024c4f87023712bf63bb42ae0b0b87c89bf5f0c1270c54"
	I0629 18:28:23.659672  210742 cri.go:87] found id: "6082a9eaec20d5fe201692d73f439a6ac8de9d8ed13e702ec5121bf30c07a64f"
	I0629 18:28:23.659679  210742 cri.go:87] found id: "03d543da0e334ca7fa00d2a31f9c7289e04bcad3ddb4a2e7c0ed325b53ee8dfa"
	I0629 18:28:23.659687  210742 cri.go:87] found id: ""
	I0629 18:28:23.659717  210742 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0629 18:28:23.670441  210742 cri.go:114] JSON = null
	W0629 18:28:23.670490  210742 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0629 18:28:23.670527  210742 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 18:28:23.676935  210742 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 18:28:23.676955  210742 kubeadm.go:626] restartCluster start
	I0629 18:28:23.676984  210742 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 18:28:23.682850  210742 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:23.683474  210742 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220629182651-10091" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 18:28:23.683854  210742 kubeconfig.go:127] "default-k8s-different-port-20220629182651-10091" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 18:28:23.684507  210742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk893d9eb214a7622f390991dc9e953bb49b2322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 18:28:23.685871  210742 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 18:28:23.691831  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:23.691872  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:23.699088  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:23.899478  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:23.899594  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:23.907942  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:24.100121  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:24.100206  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:24.108484  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:24.299783  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:24.299839  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:24.308153  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:24.499225  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:24.499300  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:24.507597  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:24.699960  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:24.700036  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:24.708373  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:24.899633  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:24.899702  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:24.907940  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:25.100191  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:25.100264  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:25.108747  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:25.300009  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:25.300084  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:25.308437  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:25.499703  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:25.499766  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:25.508966  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:25.699181  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:25.699263  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:25.707595  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:25.899872  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:25.899960  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:25.908391  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:26.099684  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:26.099743  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:26.108045  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:26.299281  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:26.299343  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:26.307608  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:26.499946  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:26.500010  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:26.508116  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:22.419302  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:24.918207  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:22.517313  156452 out.go:204]   - Booting up control plane ...
	I0629 18:28:26.275446  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:28.776664  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:26.699692  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:26.699748  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:26.707930  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:26.707948  210742 api_server.go:165] Checking apiserver status ...
	I0629 18:28:26.707987  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 18:28:26.715565  210742 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:26.715590  210742 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 18:28:26.715597  210742 kubeadm.go:1092] stopping kube-system containers ...
	I0629 18:28:26.715607  210742 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0629 18:28:26.715652  210742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0629 18:28:26.739230  210742 cri.go:87] found id: "0b5e72c2a77a272ce656dfe1c4b30b1df25536f8da6f720efc855dd316abf7eb"
	I0629 18:28:26.739253  210742 cri.go:87] found id: "9cbbae50ce14d0fa0e963b083a868bec333a4f6e1ac1ced59b8d9e48472c0d0a"
	I0629 18:28:26.739260  210742 cri.go:87] found id: "9a5be7953708436f4de1a120dadda358016eee6cd6c6cc81a8f2f6ed8939b385"
	I0629 18:28:26.739267  210742 cri.go:87] found id: "d6e488282255aa5ad3cdd1221874c734409869c0c1beaabf92068ab0b894c90f"
	I0629 18:28:26.739272  210742 cri.go:87] found id: "692aff0d6c84793075ef5a24fcbbffca9129217eaba313c176dc32f4cf8bdd8c"
	I0629 18:28:26.739278  210742 cri.go:87] found id: "bc82c69b829afa2467024c4f87023712bf63bb42ae0b0b87c89bf5f0c1270c54"
	I0629 18:28:26.739287  210742 cri.go:87] found id: "6082a9eaec20d5fe201692d73f439a6ac8de9d8ed13e702ec5121bf30c07a64f"
	I0629 18:28:26.739296  210742 cri.go:87] found id: "03d543da0e334ca7fa00d2a31f9c7289e04bcad3ddb4a2e7c0ed325b53ee8dfa"
	I0629 18:28:26.739306  210742 cri.go:87] found id: ""
	I0629 18:28:26.739313  210742 cri.go:232] Stopping containers: [0b5e72c2a77a272ce656dfe1c4b30b1df25536f8da6f720efc855dd316abf7eb 9cbbae50ce14d0fa0e963b083a868bec333a4f6e1ac1ced59b8d9e48472c0d0a 9a5be7953708436f4de1a120dadda358016eee6cd6c6cc81a8f2f6ed8939b385 d6e488282255aa5ad3cdd1221874c734409869c0c1beaabf92068ab0b894c90f 692aff0d6c84793075ef5a24fcbbffca9129217eaba313c176dc32f4cf8bdd8c bc82c69b829afa2467024c4f87023712bf63bb42ae0b0b87c89bf5f0c1270c54 6082a9eaec20d5fe201692d73f439a6ac8de9d8ed13e702ec5121bf30c07a64f 03d543da0e334ca7fa00d2a31f9c7289e04bcad3ddb4a2e7c0ed325b53ee8dfa]
	I0629 18:28:26.739353  210742 ssh_runner.go:195] Run: which crictl
	I0629 18:28:26.742101  210742 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 0b5e72c2a77a272ce656dfe1c4b30b1df25536f8da6f720efc855dd316abf7eb 9cbbae50ce14d0fa0e963b083a868bec333a4f6e1ac1ced59b8d9e48472c0d0a 9a5be7953708436f4de1a120dadda358016eee6cd6c6cc81a8f2f6ed8939b385 d6e488282255aa5ad3cdd1221874c734409869c0c1beaabf92068ab0b894c90f 692aff0d6c84793075ef5a24fcbbffca9129217eaba313c176dc32f4cf8bdd8c bc82c69b829afa2467024c4f87023712bf63bb42ae0b0b87c89bf5f0c1270c54 6082a9eaec20d5fe201692d73f439a6ac8de9d8ed13e702ec5121bf30c07a64f 03d543da0e334ca7fa00d2a31f9c7289e04bcad3ddb4a2e7c0ed325b53ee8dfa
	I0629 18:28:26.765987  210742 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 18:28:26.775979  210742 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 18:28:26.782457  210742 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun 29 18:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun 29 18:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jun 29 18:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun 29 18:27 /etc/kubernetes/scheduler.conf
	
	I0629 18:28:26.782498  210742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0629 18:28:26.788761  210742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0629 18:28:26.795185  210742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0629 18:28:26.801467  210742 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:26.801504  210742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 18:28:26.807370  210742 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0629 18:28:26.813535  210742 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 18:28:26.813571  210742 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 18:28:26.819373  210742 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 18:28:26.825908  210742 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 18:28:26.825923  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 18:28:26.868310  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 18:28:27.548312  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 18:28:27.730625  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 18:28:27.786611  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 18:28:27.891864  210742 api_server.go:51] waiting for apiserver process to appear ...
	I0629 18:28:27.891928  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:28:28.401279  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:28:28.900998  210742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:28:28.972773  210742 api_server.go:71] duration metric: took 1.080908721s to wait for apiserver process to appear ...
	I0629 18:28:28.972802  210742 api_server.go:87] waiting for apiserver healthz status ...
	I0629 18:28:28.972818  210742 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0629 18:28:28.973243  210742 api_server.go:256] stopped: https://192.168.76.2:8444/healthz: Get "https://192.168.76.2:8444/healthz": dial tcp 192.168.76.2:8444: connect: connection refused
	I0629 18:28:29.473618  210742 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0629 18:28:26.918571  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:28.918872  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:30.919195  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:32.481963  210742 api_server.go:266] https://192.168.76.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0629 18:28:32.481991  210742 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 18:28:32.973414  210742 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0629 18:28:32.978061  210742 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 18:28:32.978089  210742 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 18:28:33.473458  210742 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0629 18:28:33.478197  210742 api_server.go:266] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 18:28:33.478227  210742 api_server.go:102] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 18:28:33.974336  210742 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0629 18:28:33.979592  210742 api_server.go:266] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0629 18:28:33.986614  210742 api_server.go:140] control plane version: v1.24.2
	I0629 18:28:33.986637  210742 api_server.go:130] duration metric: took 5.013829819s to wait for apiserver health ...
	I0629 18:28:33.986645  210742 cni.go:95] Creating CNI manager for ""
	I0629 18:28:33.986653  210742 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0629 18:28:33.988779  210742 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0629 18:28:31.275808  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:33.776049  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:33.990173  210742 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0629 18:28:33.993740  210742 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0629 18:28:33.993757  210742 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0629 18:28:34.008010  210742 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0629 18:28:35.153938  210742 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.145892277s)
	I0629 18:28:35.153974  210742 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 18:28:35.160426  210742 system_pods.go:59] 9 kube-system pods found
	I0629 18:28:35.160461  210742 system_pods.go:61] "coredns-6d4b75cb6d-m84hf" [fb4dd5a1-bf99-4e82-88d4-5203f2f07b45] Running
	I0629 18:28:35.160475  210742 system_pods.go:61] "etcd-default-k8s-different-port-20220629182651-10091" [0994e46b-9334-4f1c-abb3-2ef94327abcb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0629 18:28:35.160483  210742 system_pods.go:61] "kindnet-fzjz7" [bf390e91-bc07-474d-bac7-e5ddffaf5c5b] Running
	I0629 18:28:35.160493  210742 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220629182651-10091" [0b7fced2-9882-4645-9799-88d1b249f1e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 18:28:35.160502  210742 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220629182651-10091" [fcb877c2-4ca5-47be-b22a-ccc09ff30529] Running
	I0629 18:28:35.160507  210742 system_pods.go:61] "kube-proxy-9f8gl" [10fc1d4a-fba0-4988-b313-ebd0e194aa20] Running
	I0629 18:28:35.160520  210742 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220629182651-10091" [c81ae6b8-3e59-4e70-9871-f5ef43d67207] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0629 18:28:35.160538  210742 system_pods.go:61] "metrics-server-5c6f97fb75-glsff" [1a908898-2770-4b40-ba32-d12441a893e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 18:28:35.160550  210742 system_pods.go:61] "storage-provisioner" [1cfec9cd-6f2e-4c60-aab2-4a0e6dbd3d63] Running
	I0629 18:28:35.160561  210742 system_pods.go:74] duration metric: took 6.578837ms to wait for pod list to return data ...
	I0629 18:28:35.160572  210742 node_conditions.go:102] verifying NodePressure condition ...
	I0629 18:28:35.163055  210742 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0629 18:28:35.163090  210742 node_conditions.go:123] node cpu capacity is 8
	I0629 18:28:35.163102  210742 node_conditions.go:105] duration metric: took 2.525709ms to run NodePressure ...
	I0629 18:28:35.163115  210742 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 18:28:35.292250  210742 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0629 18:28:35.297764  210742 kubeadm.go:777] kubelet initialised
	I0629 18:28:35.297835  210742 kubeadm.go:778] duration metric: took 5.555297ms waiting for restarted kubelet to initialise ...
	I0629 18:28:35.297853  210742 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 18:28:35.314614  210742 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-m84hf" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:35.319444  210742 pod_ready.go:92] pod "coredns-6d4b75cb6d-m84hf" in "kube-system" namespace has status "Ready":"True"
	I0629 18:28:35.319463  210742 pod_ready.go:81] duration metric: took 4.811978ms waiting for pod "coredns-6d4b75cb6d-m84hf" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:35.319470  210742 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:33.419006  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:35.918723  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:36.274982  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:38.275829  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:40.775836  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:37.377402  210742 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:39.377687  210742 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:37.918953  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:39.919001  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:43.275721  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:45.276300  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:41.878161  210742 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:43.377884  210742 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace has status "Ready":"True"
	I0629 18:28:43.377914  210742 pod_ready.go:81] duration metric: took 8.05843699s waiting for pod "etcd-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:43.377931  210742 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:45.387674  210742 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace has status "Ready":"True"
	I0629 18:28:45.387705  210742 pod_ready.go:81] duration metric: took 2.009766127s waiting for pod "kube-apiserver-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:45.387717  210742 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:45.391644  210742 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace has status "Ready":"True"
	I0629 18:28:45.391659  210742 pod_ready.go:81] duration metric: took 3.933305ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:45.391668  210742 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9f8gl" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:45.395529  210742 pod_ready.go:92] pod "kube-proxy-9f8gl" in "kube-system" namespace has status "Ready":"True"
	I0629 18:28:45.395544  210742 pod_ready.go:81] duration metric: took 3.870722ms waiting for pod "kube-proxy-9f8gl" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:45.395551  210742 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:45.398995  210742 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace has status "Ready":"True"
	I0629 18:28:45.399014  210742 pod_ready.go:81] duration metric: took 3.457505ms waiting for pod "kube-scheduler-default-k8s-different-port-20220629182651-10091" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:45.399025  210742 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace to be "Ready" ...
	I0629 18:28:42.419063  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:44.918578  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:47.774881  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:49.775563  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:47.407719  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:49.408422  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:51.408642  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:46.919901  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:49.420533  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:52.275029  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:54.775783  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:53.908237  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:55.908382  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:51.919064  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:54.419024  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:56.775819  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:58.775932  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:58.408569  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:00.908376  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:56.918980  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:28:58.919429  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:01.418395  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:01.275571  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:03.275965  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:05.774834  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:03.408121  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:05.408827  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:03.418726  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:05.418939  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:07.775665  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:09.776185  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:07.907910  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:09.908322  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:07.918259  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:09.919482  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:12.275718  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:14.275785  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:12.408446  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:14.908299  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:12.418577  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:14.419478  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:16.775107  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:18.775884  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:17.408006  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:19.410322  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:16.919481  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:19.419070  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:21.275172  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:23.275379  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:25.775344  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:21.908420  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:24.408660  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:26.409225  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:21.919135  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:24.418788  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:27.775980  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:30.275670  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:28.908232  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:31.407979  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:26.918548  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:28.918811  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:31.418702  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:32.276399  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:34.775336  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:33.408257  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:35.409092  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:33.919045  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:36.418737  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:36.775497  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:39.275358  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:37.907560  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:39.907896  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:38.418911  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:40.919076  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:41.276113  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:43.775631  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:42.409214  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:44.908115  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:43.420482  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:45.918998  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:46.275109  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:48.275861  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:50.275913  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:47.408113  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:49.408467  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:48.418767  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:50.418905  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:52.775353  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:54.776083  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:51.908025  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:54.408224  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:52.919122  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:54.919686  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:57.275641  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:59.275688  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:56.925827  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:59.408546  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:56.920530  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:29:59.418237  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:01.775469  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:03.775655  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:01.907918  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:04.408256  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:01.918893  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:04.419039  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:06.275573  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:08.275668  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:10.775676  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:06.908276  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:09.407891  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:11.408627  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:06.918967  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:08.919138  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:11.418748  195900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:12.775724  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:14.776008  199913 pod_ready.go:102] pod "metrics-server-7958775c-jcgpl" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:13.414007  195900 pod_ready.go:81] duration metric: took 4m0.383378352s waiting for pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace to be "Ready" ...
	E0629 18:30:13.414028  195900 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-455kd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0629 18:30:13.414044  195900 pod_ready.go:38] duration metric: took 4m13.419284768s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 18:30:13.414097  195900 kubeadm.go:630] restartCluster took 4m24.875544724s
	W0629 18:30:13.414229  195900 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0629 18:30:13.414287  195900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0629 18:30:15.749625  195900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.335313374s)
	I0629 18:30:15.749680  195900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 18:30:15.759866  195900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 18:30:15.767587  195900 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 18:30:15.767644  195900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 18:30:15.775023  195900 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 18:30:15.775068  195900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 18:30:13.908252  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:15.908615  210742 pod_ready.go:102] pod "metrics-server-5c6f97fb75-glsff" in "kube-system" namespace has status "Ready":"False"
	I0629 18:30:16.022393  195900 out.go:204]   - Generating certificates and keys ...
	I0629 18:30:17.533443  156452 kubeadm.go:397] StartCluster complete in 7m55.902652472s
	I0629 18:30:17.533481  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0629 18:30:17.533520  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0629 18:30:17.557148  156452 cri.go:87] found id: ""
	I0629 18:30:17.557171  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.557177  156452 logs.go:276] No container was found matching "kube-apiserver"
	I0629 18:30:17.557182  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0629 18:30:17.557227  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0629 18:30:17.580224  156452 cri.go:87] found id: ""
	I0629 18:30:17.580252  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.580260  156452 logs.go:276] No container was found matching "etcd"
	I0629 18:30:17.580266  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0629 18:30:17.580328  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0629 18:30:17.603094  156452 cri.go:87] found id: ""
	I0629 18:30:17.603123  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.603132  156452 logs.go:276] No container was found matching "coredns"
	I0629 18:30:17.603138  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0629 18:30:17.603179  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0629 18:30:17.625492  156452 cri.go:87] found id: ""
	I0629 18:30:17.625512  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.625519  156452 logs.go:276] No container was found matching "kube-scheduler"
	I0629 18:30:17.625524  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0629 18:30:17.625563  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0629 18:30:17.647042  156452 cri.go:87] found id: ""
	I0629 18:30:17.647063  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.647068  156452 logs.go:276] No container was found matching "kube-proxy"
	I0629 18:30:17.647074  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0629 18:30:17.647113  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0629 18:30:17.668741  156452 cri.go:87] found id: ""
	I0629 18:30:17.668763  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.668770  156452 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 18:30:17.668775  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0629 18:30:17.668822  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0629 18:30:17.694345  156452 cri.go:87] found id: ""
	I0629 18:30:17.694371  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.694383  156452 logs.go:276] No container was found matching "storage-provisioner"
	I0629 18:30:17.694391  156452 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0629 18:30:17.694447  156452 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0629 18:30:17.718181  156452 cri.go:87] found id: ""
	I0629 18:30:17.718202  156452 logs.go:274] 0 containers: []
	W0629 18:30:17.718208  156452 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 18:30:17.718218  156452 logs.go:123] Gathering logs for describe nodes ...
	I0629 18:30:17.718235  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 18:30:17.765334  156452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 18:30:17.765355  156452 logs.go:123] Gathering logs for containerd ...
	I0629 18:30:17.765370  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0629 18:30:17.820076  156452 logs.go:123] Gathering logs for container status ...
	I0629 18:30:17.820117  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 18:30:17.851247  156452 logs.go:123] Gathering logs for kubelet ...
	I0629 18:30:17.851280  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0629 18:30:17.918926  156452 logs.go:138] Found kubelet problem: Jun 29 18:30:17 kubernetes-upgrade-20220629182055-10091 kubelet[11610]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:30:17.965386  156452 logs.go:123] Gathering logs for dmesg ...
	I0629 18:30:17.965422  156452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0629 18:30:17.982581  156452 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.13.0-1033-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0629 18:28:21.666839    9720 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1033-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0629 18:30:17.982627  156452 out.go:239] * 
	W0629 18:30:17.982853  156452 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.13.0-1033-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0629 18:28:21.666839    9720 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1033-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 18:30:17.982887  156452 out.go:239] * 
	W0629 18:30:17.983620  156452 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 18:30:17.986243  156452 out.go:177] X Problems detected in kubelet:
	I0629 18:30:17.987871  156452 out.go:177]   Jun 29 18:30:17 kubernetes-upgrade-20220629182055-10091 kubelet[11610]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0629 18:30:17.991405  156452 out.go:177] 
	W0629 18:30:17.993287  156452 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.13.0-1033-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0629 18:28:21.666839    9720 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1033-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 18:30:17.993404  156452 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0629 18:30:17.993467  156452 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0629 18:30:17.995197  156452 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-06-29 18:21:43 UTC, end at Wed 2022-06-29 18:30:19 UTC. --
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.425516077Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.441932642Z" level=info msg="StopPodSandbox for \"this\""
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.441981892Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.457362416Z" level=info msg="StopPodSandbox for \"endpoint\""
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.457422811Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.473956984Z" level=info msg="StopPodSandbox for \"is\""
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.474020902Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.490521607Z" level=info msg="StopPodSandbox for \"deprecated,\""
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.490567782Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.505989290Z" level=info msg="StopPodSandbox for \"please\""
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.506040579Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.521230126Z" level=info msg="StopPodSandbox for \"consider\""
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.521282149Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.537496765Z" level=info msg="StopPodSandbox for \"using\""
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.537556928Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.553064472Z" level=info msg="StopPodSandbox for \"full\""
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.553133807Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.568543386Z" level=info msg="StopPodSandbox for \"URL\""
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.568587186Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.583256557Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.583300786Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.598287037Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.598345607Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.613966432Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jun 29 18:28:21 kubernetes-upgrade-20220629182055-10091 containerd[504]: time="2022-06-29T18:28:21.614015753Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	[  +1.025362] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f495d51618b
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f495d51618b
	[  +0.000005] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f495d51618b
	[  +0.000002] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	[  +2.015801] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f495d51618b
	[  +0.000006] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f495d51618b
	[  +0.000002] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	[  +0.003946] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f495d51618b
	[  +0.000005] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	[  +4.091633] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f495d51618b
	[  +0.000007] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f495d51618b
	[  +0.000001] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	[  +0.003938] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f495d51618b
	[  +0.000005] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	[  +8.187204] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f495d51618b
	[  +0.000005] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f495d51618b
	[  +0.000001] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	[  +0.003967] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2f495d51618b
	[  +0.000005] ll header: 00000000: 02 42 09 9d 99 13 02 42 c0 a8 55 02 08 00
	
	* 
	* ==> kernel <==
	*  18:30:19 up  1:12,  0 users,  load average: 0.91, 1.67, 1.61
	Linux kubernetes-upgrade-20220629182055-10091 5.13.0-1033-gcp #40~20.04.1-Ubuntu SMP Tue Jun 14 00:44:12 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 18:21:43 UTC, end at Wed 2022-06-29 18:30:19 UTC. --
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --storage-driver-buffer-duration duration                  Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction (default 1m0s) (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --storage-driver-db string                                 database name (default "cadvisor") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --storage-driver-host string                               database host:port (default "localhost:8086") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --storage-driver-password string                           database password (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --storage-driver-secure                                    use secure connection with database (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --storage-driver-table string                              table name (default "stats") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --storage-driver-user string                               database username (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --streaming-connection-idle-timeout duration               Maximum time a streaming connection can be idle before the connection is automatically closed. 0 indicates no timeout. Example: '5m'. Note: All connections to the kubelet server have a maximum duration of 4 hours. (default 4h0m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --sync-frequency duration                                  Max period between synchronizing running containers and config (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --system-cgroups string                                    Optional absolute name of cgroups in which to place all non-kernel processes that are not already inside a cgroup under '/'. Empty for no container. Rolling back the flag requires a reboot. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --system-reserved mapStringString                          A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi) pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more detail. [default=none] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --system-reserved-cgroup string                            Absolute name of the top level cgroup that is used to manage non-kubernetes components for which compute resources were reserved via '--system-reserved' flag. Ex. '/system-reserved'. [default=''] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --tls-cert-file string                                     File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --tls-cipher-suites strings                                Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:                 Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:                 Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --tls-min-version string                                   Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --tls-private-key-file string                              File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --topology-manager-policy string                           Topology Manager policy to use. Possible values: 'none', 'best-effort', 'restricted', 'single-numa-node'. (default "none") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --topology-manager-scope string                            Scope to which topology hints applied. Topology Manager collects hints from Hint Providers and applies them to defined scope to ensure the pod admission. Possible values: 'container', 'pod'. (default "container") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:   -v, --v Level                                                  number for the log level verbosity
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --version version[=true]                                   Print version information and quit
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --vmodule pattern=N,...                                    comma-separated list of pattern=N settings for file-filtered logging (only works for text log format)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --volume-plugin-dir string                                 The full path of the directory in which to search for additional third party volume plugins (default "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jun 29 18:30:18 kubernetes-upgrade-20220629182055-10091 kubelet[11778]:       --volume-stats-agg-period duration                         Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes.  To disable volume calculations, set to a negative number. (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	
	

-- /stdout --
** stderr ** 
	E0629 18:30:19.410426  218329 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
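
The failure above is a kubeadm wait-control-plane timeout, and the kubeadm output quoted in the logs suggests inspecting the control-plane containers with crictl. A minimal diagnosis sketch, assuming a shell on the minikube node (for the docker driver, `minikube ssh -p kubernetes-upgrade-20220629182055-10091`); the endpoint and crictl commands are taken verbatim from the log above:

```shell
# List all Kubernetes containers (running and exited) known to containerd:
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause

# Inspect a failing container's logs by its ID:
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID

# The kubelet unit logs usually explain why the control plane never came up:
journalctl -xeu kubelet
```
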
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220629182055-10091 -n kubernetes-upgrade-20220629182055-10091
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220629182055-10091 -n kubernetes-upgrade-20220629182055-10091: exit status 2 (417.872032ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-20220629182055-10091" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220629182055-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220629182055-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220629182055-10091: (2.221715595s)
--- FAIL: TestKubernetesUpgrade (566.86s)
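
The minikube suggestion in the stderr above also offers a concrete retry path. A hedged sketch of that suggestion (profile name and flag value are taken from the log; whether it resolves the failure depends on the host's cgroup configuration):

```shell
# Retry the start with the kubelet cgroup driver minikube suggested above:
out/minikube-linux-amd64 start -p kubernetes-upgrade-20220629182055-10091 \
  --extra-config=kubelet.cgroup-driver=systemd
# If kubelet still fails to come up, check its service logs:
# journalctl -xeu kubelet
```
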

x
+
TestNetworkPlugins/group/calico/Start (517.58s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220629182253-10091 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220629182253-10091 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (8m37.569430066s)

-- stdout --
	* [calico-20220629182253-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node calico-20220629182253-10091 in cluster calico-20220629182253-10091
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0629 18:33:47.993597  255529 out.go:296] Setting OutFile to fd 1 ...
	I0629 18:33:47.993753  255529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:33:47.993759  255529 out.go:309] Setting ErrFile to fd 2...
	I0629 18:33:47.993766  255529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:33:47.994317  255529 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 18:33:47.994667  255529 out.go:303] Setting JSON to false
	I0629 18:33:47.996542  255529 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4578,"bootTime":1656523050,"procs":676,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1033-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0629 18:33:47.996622  255529 start.go:125] virtualization: kvm guest
	I0629 18:33:48.000964  255529 out.go:177] * [calico-20220629182253-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0629 18:33:48.002486  255529 notify.go:193] Checking for updates...
	I0629 18:33:48.004839  255529 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 18:33:48.006724  255529 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 18:33:48.013977  255529 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 18:33:48.015473  255529 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 18:33:48.016972  255529 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0629 18:33:48.018883  255529 config.go:178] Loaded profile config "cilium-20220629182253-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:33:48.019023  255529 config.go:178] Loaded profile config "embed-certs-20220629183112-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:33:48.019139  255529 config.go:178] Loaded profile config "kindnet-20220629182253-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:33:48.019202  255529 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 18:33:48.066392  255529 docker.go:137] docker version: linux-20.10.17
	I0629 18:33:48.066513  255529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:33:48.220990  255529 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-29 18:33:48.124959232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1033-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:33:48.221126  255529 docker.go:254] overlay module found
	I0629 18:33:48.223542  255529 out.go:177] * Using the docker driver based on user configuration
	I0629 18:33:48.225550  255529 start.go:284] selected driver: docker
	I0629 18:33:48.225570  255529 start.go:808] validating driver "docker" against <nil>
	I0629 18:33:48.225593  255529 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 18:33:48.226737  255529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:33:48.351015  255529 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-29 18:33:48.261366062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1033-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:33:48.351143  255529 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 18:33:48.351420  255529 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 18:33:48.354073  255529 out.go:177] * Using Docker driver with root privileges
	I0629 18:33:48.355457  255529 cni.go:95] Creating CNI manager for "calico"
	I0629 18:33:48.355484  255529 start_flags.go:305] Found "Calico" CNI - setting NetworkPlugin=cni
	I0629 18:33:48.355496  255529 start_flags.go:310] config:
	{Name:calico-20220629182253-10091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:calico-20220629182253-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:33:48.357137  255529 out.go:177] * Starting control plane node calico-20220629182253-10091 in cluster calico-20220629182253-10091
	I0629 18:33:48.358454  255529 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0629 18:33:48.359820  255529 out.go:177] * Pulling base image ...
	I0629 18:33:48.361031  255529 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0629 18:33:48.361073  255529 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
	I0629 18:33:48.361088  255529 cache.go:57] Caching tarball of preloaded images
	I0629 18:33:48.361147  255529 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 18:33:48.361342  255529 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 18:33:48.361363  255529 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on containerd
	I0629 18:33:48.361491  255529 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/config.json ...
	I0629 18:33:48.361523  255529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/config.json: {Name:mk2c337d234b41c9e100704254886137a640db46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 18:33:48.404295  255529 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 18:33:48.404335  255529 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 18:33:48.404359  255529 cache.go:208] Successfully downloaded all kic artifacts
	I0629 18:33:48.404400  255529 start.go:352] acquiring machines lock for calico-20220629182253-10091: {Name:mkf277945b743a7c0d8f0588fdd9ac1d183d3401 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 18:33:48.404569  255529 start.go:356] acquired machines lock for "calico-20220629182253-10091" in 142.206µs
	I0629 18:33:48.404601  255529 start.go:91] Provisioning new machine with config: &{Name:calico-20220629182253-10091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:calico-20220629182253-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0629 18:33:48.404699  255529 start.go:131] createHost starting for "" (driver="docker")
	I0629 18:33:48.407873  255529 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0629 18:33:48.408158  255529 start.go:165] libmachine.API.Create for "calico-20220629182253-10091" (driver="docker")
	I0629 18:33:48.408196  255529 client.go:168] LocalClient.Create starting
	I0629 18:33:48.408272  255529 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem
	I0629 18:33:48.408311  255529 main.go:134] libmachine: Decoding PEM data...
	I0629 18:33:48.408339  255529 main.go:134] libmachine: Parsing certificate...
	I0629 18:33:48.408410  255529 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem
	I0629 18:33:48.408433  255529 main.go:134] libmachine: Decoding PEM data...
	I0629 18:33:48.408451  255529 main.go:134] libmachine: Parsing certificate...
	I0629 18:33:48.408920  255529 cli_runner.go:164] Run: docker network inspect calico-20220629182253-10091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0629 18:33:48.446536  255529 cli_runner.go:211] docker network inspect calico-20220629182253-10091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0629 18:33:48.446609  255529 network_create.go:272] running [docker network inspect calico-20220629182253-10091] to gather additional debugging logs...
	I0629 18:33:48.446633  255529 cli_runner.go:164] Run: docker network inspect calico-20220629182253-10091
	W0629 18:33:48.478712  255529 cli_runner.go:211] docker network inspect calico-20220629182253-10091 returned with exit code 1
	I0629 18:33:48.478740  255529 network_create.go:275] error running [docker network inspect calico-20220629182253-10091]: docker network inspect calico-20220629182253-10091: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220629182253-10091
	I0629 18:33:48.478765  255529 network_create.go:277] output of [docker network inspect calico-20220629182253-10091]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220629182253-10091
	
	** /stderr **
	I0629 18:33:48.478810  255529 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 18:33:48.521366  255529 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-058f950f46d1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:52:66:85:72}}
	I0629 18:33:48.522585  255529 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-08c9cfc6bba4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:90:42:e6:fb}}
	I0629 18:33:48.523927  255529 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-453145a05c1a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:7e:31:2a:36}}
	I0629 18:33:48.525058  255529 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc000010138] misses:0}
	I0629 18:33:48.525095  255529 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 18:33:48.525109  255529 network_create.go:115] attempt to create docker network calico-20220629182253-10091 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0629 18:33:48.525155  255529 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220629182253-10091 calico-20220629182253-10091
	I0629 18:33:48.607163  255529 network_create.go:99] docker network calico-20220629182253-10091 192.168.76.0/24 created
	I0629 18:33:48.607197  255529 kic.go:106] calculated static IP "192.168.76.2" for the "calico-20220629182253-10091" container
	I0629 18:33:48.607263  255529 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0629 18:33:48.647721  255529 cli_runner.go:164] Run: docker volume create calico-20220629182253-10091 --label name.minikube.sigs.k8s.io=calico-20220629182253-10091 --label created_by.minikube.sigs.k8s.io=true
	I0629 18:33:48.678823  255529 oci.go:103] Successfully created a docker volume calico-20220629182253-10091
	I0629 18:33:48.678939  255529 cli_runner.go:164] Run: docker run --rm --name calico-20220629182253-10091-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220629182253-10091 --entrypoint /usr/bin/test -v calico-20220629182253-10091:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0629 18:33:49.302931  255529 oci.go:107] Successfully prepared a docker volume calico-20220629182253-10091
	I0629 18:33:49.302989  255529 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0629 18:33:49.303015  255529 kic.go:179] Starting extracting preloaded images to volume ...
	I0629 18:33:49.303098  255529 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220629182253-10091:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir
	I0629 18:33:56.341304  255529 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220629182253-10091:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (7.038124679s)
	I0629 18:33:56.341347  255529 kic.go:188] duration metric: took 7.038328 seconds to extract preloaded images to volume
	W0629 18:33:56.341510  255529 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0629 18:33:56.341646  255529 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0629 18:33:56.508191  255529 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220629182253-10091 --name calico-20220629182253-10091 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220629182253-10091 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220629182253-10091 --network calico-20220629182253-10091 --ip 192.168.76.2 --volume calico-20220629182253-10091:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
	I0629 18:33:57.016805  255529 cli_runner.go:164] Run: docker container inspect calico-20220629182253-10091 --format={{.State.Running}}
	I0629 18:33:57.057029  255529 cli_runner.go:164] Run: docker container inspect calico-20220629182253-10091 --format={{.State.Status}}
	I0629 18:33:57.124019  255529 cli_runner.go:164] Run: docker exec calico-20220629182253-10091 stat /var/lib/dpkg/alternatives/iptables
	I0629 18:33:57.190883  255529 oci.go:144] the created container "calico-20220629182253-10091" has a running status.
	I0629 18:33:57.190920  255529 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/calico-20220629182253-10091/id_rsa...
	I0629 18:33:57.360553  255529 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/calico-20220629182253-10091/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0629 18:33:57.479879  255529 cli_runner.go:164] Run: docker container inspect calico-20220629182253-10091 --format={{.State.Status}}
	I0629 18:33:57.543791  255529 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0629 18:33:57.543816  255529 kic_runner.go:114] Args: [docker exec --privileged calico-20220629182253-10091 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0629 18:33:57.644339  255529 cli_runner.go:164] Run: docker container inspect calico-20220629182253-10091 --format={{.State.Status}}
	I0629 18:33:57.679287  255529 machine.go:88] provisioning docker machine ...
	I0629 18:33:57.679325  255529 ubuntu.go:169] provisioning hostname "calico-20220629182253-10091"
	I0629 18:33:57.679391  255529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629182253-10091
	I0629 18:33:57.718812  255529 main.go:134] libmachine: Using SSH client type: native
	I0629 18:33:57.719045  255529 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0629 18:33:57.719069  255529 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220629182253-10091 && echo "calico-20220629182253-10091" | sudo tee /etc/hostname
	I0629 18:33:57.857606  255529 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220629182253-10091
	
	I0629 18:33:57.857680  255529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629182253-10091
	I0629 18:33:57.895262  255529 main.go:134] libmachine: Using SSH client type: native
	I0629 18:33:57.895406  255529 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0629 18:33:57.895446  255529 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220629182253-10091' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220629182253-10091/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220629182253-10091' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 18:33:58.012882  255529 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 18:33:58.012926  255529 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 18:33:58.012958  255529 ubuntu.go:177] setting up certificates
	I0629 18:33:58.012973  255529 provision.go:83] configureAuth start
	I0629 18:33:58.013029  255529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629182253-10091
	I0629 18:33:58.046172  255529 provision.go:138] copyHostCerts
	I0629 18:33:58.046225  255529 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 18:33:58.046231  255529 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 18:33:58.046289  255529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1078 bytes)
	I0629 18:33:58.046365  255529 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 18:33:58.046385  255529 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 18:33:58.046407  255529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 18:33:58.046454  255529 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 18:33:58.046462  255529 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 18:33:58.046482  255529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1679 bytes)
	I0629 18:33:58.046524  255529 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.calico-20220629182253-10091 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220629182253-10091]
	I0629 18:33:58.467934  255529 provision.go:172] copyRemoteCerts
	I0629 18:33:58.467996  255529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 18:33:58.468047  255529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629182253-10091
	I0629 18:33:58.503599  255529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/calico-20220629182253-10091/id_rsa Username:docker}
	I0629 18:33:58.592944  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0629 18:33:58.611876  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0629 18:33:58.629096  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 18:33:58.650584  255529 provision.go:86] duration metric: configureAuth took 637.598452ms
	I0629 18:33:58.650614  255529 ubuntu.go:193] setting minikube options for container-runtime
	I0629 18:33:58.650800  255529 config.go:178] Loaded profile config "calico-20220629182253-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:33:58.650816  255529 machine.go:91] provisioned docker machine in 971.50776ms
	I0629 18:33:58.650824  255529 client.go:171] LocalClient.Create took 10.242622029s
	I0629 18:33:58.650844  255529 start.go:173] duration metric: libmachine.API.Create for "calico-20220629182253-10091" took 10.242684421s
	I0629 18:33:58.650856  255529 start.go:306] post-start starting for "calico-20220629182253-10091" (driver="docker")
	I0629 18:33:58.650869  255529 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 18:33:58.650921  255529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 18:33:58.650969  255529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629182253-10091
	I0629 18:33:58.687585  255529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/calico-20220629182253-10091/id_rsa Username:docker}
	I0629 18:33:58.780421  255529 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 18:33:58.783205  255529 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 18:33:58.783232  255529 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 18:33:58.783250  255529 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 18:33:58.783262  255529 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 18:33:58.783280  255529 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 18:33:58.783336  255529 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 18:33:58.783427  255529 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/100912.pem -> 100912.pem in /etc/ssl/certs
	I0629 18:33:58.783522  255529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 18:33:58.790442  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/100912.pem --> /etc/ssl/certs/100912.pem (1708 bytes)
	I0629 18:33:58.809952  255529 start.go:309] post-start completed in 159.076352ms
	I0629 18:33:58.810344  255529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629182253-10091
	I0629 18:33:58.842548  255529 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/config.json ...
	I0629 18:33:58.842821  255529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 18:33:58.842869  255529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629182253-10091
	I0629 18:33:58.874093  255529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/calico-20220629182253-10091/id_rsa Username:docker}
	I0629 18:33:58.957358  255529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 18:33:58.961074  255529 start.go:134] duration metric: createHost completed in 10.556361788s
	I0629 18:33:58.961100  255529 start.go:81] releasing machines lock for "calico-20220629182253-10091", held for 10.556513571s
	I0629 18:33:58.961183  255529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220629182253-10091
	I0629 18:33:58.999127  255529 ssh_runner.go:195] Run: systemctl --version
	I0629 18:33:58.999178  255529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629182253-10091
	I0629 18:33:58.999227  255529 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 18:33:58.999293  255529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629182253-10091
	I0629 18:33:59.033415  255529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/calico-20220629182253-10091/id_rsa Username:docker}
	I0629 18:33:59.034549  255529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/calico-20220629182253-10091/id_rsa Username:docker}
	I0629 18:33:59.142511  255529 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0629 18:33:59.153540  255529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 18:33:59.162776  255529 docker.go:179] disabling docker service ...
	I0629 18:33:59.162828  255529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0629 18:33:59.181179  255529 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0629 18:33:59.191791  255529 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0629 18:33:59.274234  255529 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0629 18:33:59.354349  255529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0629 18:33:59.363534  255529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 18:33:59.376175  255529 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0629 18:33:59.383994  255529 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0629 18:33:59.391608  255529 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0629 18:33:59.399417  255529 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
	I0629 18:33:59.406695  255529 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0629 18:33:59.414241  255529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
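The base64 payload in the command above can be verified offline: it decodes to a one-line drop-in that selects containerd's v2 config schema, written to /etc/containerd/containerd.conf.d/02-containerd.conf (the file the earlier `imports` sed edit points at). A standalone check, not part of minikube:

```python
import base64

# Payload copied verbatim from the ssh_runner command in the log above.
payload = "dmVyc2lvbiA9IDIK"
decoded = base64.b64decode(payload).decode()
print(decoded)  # -> version = 2
```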
	I0629 18:33:59.426434  255529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0629 18:33:59.432407  255529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0629 18:33:59.438388  255529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 18:33:59.521131  255529 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0629 18:33:59.611101  255529 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0629 18:33:59.611174  255529 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0629 18:33:59.614756  255529 start.go:468] Will wait 60s for crictl version
	I0629 18:33:59.614813  255529 ssh_runner.go:195] Run: sudo crictl version
	I0629 18:33:59.647173  255529 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0629 18:33:59.647241  255529 ssh_runner.go:195] Run: containerd --version
	I0629 18:33:59.680575  255529 ssh_runner.go:195] Run: containerd --version
	I0629 18:33:59.711335  255529 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
	I0629 18:33:59.712827  255529 cli_runner.go:164] Run: docker network inspect calico-20220629182253-10091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 18:33:59.743803  255529 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0629 18:33:59.747016  255529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
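The /etc/hosts rewrite above uses a grep -v / echo / cp pattern so the entry can be re-applied safely: any existing line ending in a tab plus the hostname is dropped before a fresh "IP<TAB>name" line is appended. A minimal sketch of the same logic (hypothetical helper, not minikube code):

```python
def upsert_hosts_entry(hosts_text: str, ip: str, name: str) -> str:
    """Drop any line ending in "\t<name>", then append "<ip>\t<name>",
    mirroring the bash one-liner in the log above."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

# Applying it twice gives the same file, which is why the grep step matters.
once = upsert_hosts_entry("127.0.0.1\tlocalhost\n",
                          "192.168.76.1", "host.minikube.internal")
twice = upsert_hosts_entry(once, "192.168.76.1", "host.minikube.internal")
print(once == twice)  # -> True
```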
	I0629 18:33:59.756329  255529 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
	I0629 18:33:59.756395  255529 ssh_runner.go:195] Run: sudo crictl images --output json
	I0629 18:33:59.778572  255529 containerd.go:547] all images are preloaded for containerd runtime.
	I0629 18:33:59.778598  255529 containerd.go:461] Images already preloaded, skipping extraction
	I0629 18:33:59.778641  255529 ssh_runner.go:195] Run: sudo crictl images --output json
	I0629 18:33:59.800995  255529 containerd.go:547] all images are preloaded for containerd runtime.
	I0629 18:33:59.801016  255529 cache_images.go:84] Images are preloaded, skipping loading
	I0629 18:33:59.801051  255529 ssh_runner.go:195] Run: sudo crictl info
	I0629 18:33:59.823598  255529 cni.go:95] Creating CNI manager for "calico"
	I0629 18:33:59.823622  255529 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 18:33:59.823635  255529 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220629182253-10091 NodeName:calico-20220629182253-10091 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 18:33:59.823809  255529 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-20220629182253-10091"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 18:33:59.823903  255529 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-20220629182253-10091 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:calico-20220629182253-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0629 18:33:59.823946  255529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 18:33:59.830669  255529 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 18:33:59.830727  255529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 18:33:59.837160  255529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (520 bytes)
	I0629 18:33:59.849552  255529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 18:33:59.862367  255529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2056 bytes)
	I0629 18:33:59.874691  255529 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0629 18:33:59.877494  255529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 18:33:59.886309  255529 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091 for IP: 192.168.76.2
	I0629 18:33:59.886412  255529 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 18:33:59.886445  255529 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 18:33:59.886488  255529 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/client.key
	I0629 18:33:59.886501  255529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/client.crt with IP's: []
	I0629 18:33:59.966884  255529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/client.crt ...
	I0629 18:33:59.966911  255529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/client.crt: {Name:mkf18c3cd76ef1b3844fd83eb02aae25c66dd76c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 18:33:59.967110  255529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/client.key ...
	I0629 18:33:59.967124  255529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/client.key: {Name:mk4eff57b1416711b80d02784b758f39254bd108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 18:33:59.967208  255529 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/apiserver.key.31bdca25
	I0629 18:33:59.967226  255529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0629 18:34:00.071381  255529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/apiserver.crt.31bdca25 ...
	I0629 18:34:00.071409  255529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/apiserver.crt.31bdca25: {Name:mk016a5d80cce717f6e767845a7de4c8372e9d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 18:34:00.071582  255529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/apiserver.key.31bdca25 ...
	I0629 18:34:00.071596  255529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/apiserver.key.31bdca25: {Name:mk1261c1dedcf41b227ef3fd63e4f13b3c24b7eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 18:34:00.071678  255529 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/apiserver.crt
	I0629 18:34:00.071730  255529 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/apiserver.key
	I0629 18:34:00.071771  255529 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/proxy-client.key
	I0629 18:34:00.071785  255529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/proxy-client.crt with IP's: []
	I0629 18:34:00.228241  255529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/proxy-client.crt ...
	I0629 18:34:00.228282  255529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/proxy-client.crt: {Name:mk7cdea30ed0adea86579a2ff23312bca8732e05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 18:34:00.228535  255529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/proxy-client.key ...
	I0629 18:34:00.228562  255529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/proxy-client.key: {Name:mk54c0690584e5361fedf537b1031f296147b74b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 18:34:00.228829  255529 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/10091.pem (1338 bytes)
	W0629 18:34:00.228906  255529 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/10091_empty.pem, impossibly tiny 0 bytes
	I0629 18:34:00.228926  255529 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1675 bytes)
	I0629 18:34:00.228957  255529 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1078 bytes)
	I0629 18:34:00.228998  255529 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 18:34:00.229037  255529 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1679 bytes)
	I0629 18:34:00.229095  255529 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/100912.pem (1708 bytes)
	I0629 18:34:00.229874  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 18:34:00.252912  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0629 18:34:00.270767  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 18:34:00.293026  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629182253-10091/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 18:34:00.310193  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 18:34:00.327461  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0629 18:34:00.347164  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 18:34:00.363554  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0629 18:34:00.384321  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/100912.pem --> /usr/share/ca-certificates/100912.pem (1708 bytes)
	I0629 18:34:00.405193  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 18:34:00.424047  255529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/10091.pem --> /usr/share/ca-certificates/10091.pem (1338 bytes)
	I0629 18:34:00.440743  255529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 18:34:00.453076  255529 ssh_runner.go:195] Run: openssl version
	I0629 18:34:00.458437  255529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100912.pem && ln -fs /usr/share/ca-certificates/100912.pem /etc/ssl/certs/100912.pem"
	I0629 18:34:00.465666  255529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100912.pem
	I0629 18:34:00.468721  255529 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/100912.pem
	I0629 18:34:00.468771  255529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100912.pem
	I0629 18:34:00.473979  255529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100912.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 18:34:00.481148  255529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 18:34:00.488314  255529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 18:34:00.491258  255529 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:53 /usr/share/ca-certificates/minikubeCA.pem
	I0629 18:34:00.491303  255529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 18:34:00.495818  255529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 18:34:00.502665  255529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10091.pem && ln -fs /usr/share/ca-certificates/10091.pem /etc/ssl/certs/10091.pem"
	I0629 18:34:00.509546  255529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10091.pem
	I0629 18:34:00.512616  255529 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/10091.pem
	I0629 18:34:00.512654  255529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10091.pem
	I0629 18:34:00.517573  255529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10091.pem /etc/ssl/certs/51391683.0"
	I0629 18:34:00.524286  255529 kubeadm.go:395] StartCluster: {Name:calico-20220629182253-10091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:calico-20220629182253-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:34:00.524383  255529 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0629 18:34:00.524422  255529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0629 18:34:00.546990  255529 cri.go:87] found id: ""
	I0629 18:34:00.547043  255529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 18:34:00.553641  255529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 18:34:00.560010  255529 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 18:34:00.560057  255529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 18:34:00.566424  255529 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 18:34:00.566471  255529 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 18:34:00.823124  255529 out.go:204]   - Generating certificates and keys ...
	I0629 18:34:03.695709  255529 out.go:204]   - Booting up control plane ...
	I0629 18:34:11.239849  255529 out.go:204]   - Configuring RBAC rules ...
	I0629 18:34:11.680203  255529 cni.go:95] Creating CNI manager for "calico"
	I0629 18:34:11.681892  255529 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0629 18:34:11.683488  255529 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0629 18:34:11.683510  255529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202050 bytes)
	I0629 18:34:11.697876  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0629 18:34:12.975685  255529 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.277774526s)
	I0629 18:34:12.975745  255529 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 18:34:12.975814  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:12.975839  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed minikube.k8s.io/name=calico-20220629182253-10091 minikube.k8s.io/updated_at=2022_06_29T18_34_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:13.074339  255529 ops.go:34] apiserver oom_adj: -16
	I0629 18:34:13.074417  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:13.630033  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:14.130043  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:14.630764  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:15.130025  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:15.629919  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:16.130392  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:16.630157  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:17.130581  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:17.630100  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:18.129898  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:18.629974  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:19.129791  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:19.629831  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:20.130593  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:20.629890  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:21.130052  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:21.629857  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:22.130537  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:22.630068  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:23.130056  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:23.630130  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:24.130752  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:24.630346  255529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 18:34:24.779052  255529 kubeadm.go:1045] duration metric: took 11.803282515s to wait for elevateKubeSystemPrivileges.
	I0629 18:34:24.779088  255529 kubeadm.go:397] StartCluster complete in 24.254808411s
	I0629 18:34:24.779109  255529 settings.go:142] acquiring lock: {Name:mke696d35ff8da75ac31df4a4fa6335e03d5f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 18:34:24.779218  255529 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 18:34:24.781348  255529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk893d9eb214a7622f390991dc9e953bb49b2322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 18:34:25.297302  255529 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220629182253-10091" rescaled to 1
	I0629 18:34:25.297379  255529 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0629 18:34:25.299846  255529 out.go:177] * Verifying Kubernetes components...
	I0629 18:34:25.297540  255529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 18:34:25.297561  255529 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0629 18:34:25.297712  255529 config.go:178] Loaded profile config "calico-20220629182253-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:34:25.301507  255529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 18:34:25.301578  255529 addons.go:65] Setting storage-provisioner=true in profile "calico-20220629182253-10091"
	I0629 18:34:25.301605  255529 addons.go:65] Setting default-storageclass=true in profile "calico-20220629182253-10091"
	I0629 18:34:25.301611  255529 addons.go:153] Setting addon storage-provisioner=true in "calico-20220629182253-10091"
	I0629 18:34:25.301618  255529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220629182253-10091"
	W0629 18:34:25.301619  255529 addons.go:162] addon storage-provisioner should already be in state true
	I0629 18:34:25.301662  255529 host.go:66] Checking if "calico-20220629182253-10091" exists ...
	I0629 18:34:25.301988  255529 cli_runner.go:164] Run: docker container inspect calico-20220629182253-10091 --format={{.State.Status}}
	I0629 18:34:25.302167  255529 cli_runner.go:164] Run: docker container inspect calico-20220629182253-10091 --format={{.State.Status}}
	I0629 18:34:25.355436  255529 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 18:34:25.356955  255529 addons.go:153] Setting addon default-storageclass=true in "calico-20220629182253-10091"
	W0629 18:34:25.362158  255529 addons.go:162] addon default-storageclass should already be in state true
	I0629 18:34:25.362162  255529 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 18:34:25.362179  255529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 18:34:25.362196  255529 host.go:66] Checking if "calico-20220629182253-10091" exists ...
	I0629 18:34:25.362235  255529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629182253-10091
	I0629 18:34:25.362706  255529 cli_runner.go:164] Run: docker container inspect calico-20220629182253-10091 --format={{.State.Status}}
	I0629 18:34:25.400496  255529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0629 18:34:25.401990  255529 node_ready.go:35] waiting up to 5m0s for node "calico-20220629182253-10091" to be "Ready" ...
	I0629 18:34:25.407268  255529 node_ready.go:49] node "calico-20220629182253-10091" has status "Ready":"True"
	I0629 18:34:25.407294  255529 node_ready.go:38] duration metric: took 5.279722ms waiting for node "calico-20220629182253-10091" to be "Ready" ...
	I0629 18:34:25.407304  255529 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 18:34:25.411786  255529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/calico-20220629182253-10091/id_rsa Username:docker}
	I0629 18:34:25.417179  255529 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace to be "Ready" ...
	I0629 18:34:25.429729  255529 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 18:34:25.429754  255529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 18:34:25.429808  255529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220629182253-10091
	I0629 18:34:25.487084  255529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/calico-20220629182253-10091/id_rsa Username:docker}
	I0629 18:34:25.590118  255529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 18:34:25.695301  255529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 18:34:26.900766  255529 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.500235518s)
	I0629 18:34:26.900801  255529 start.go:806] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0629 18:34:26.937276  255529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.347113342s)
	I0629 18:34:26.937357  255529 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.242012038s)
	I0629 18:34:26.939469  255529 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0629 18:34:26.941516  255529 addons.go:414] enableAddons completed in 1.643958162s
	I0629 18:34:27.430519  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:29.928868  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:31.933963  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:34.429389  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:36.931152  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:39.438888  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:41.931324  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:44.429323  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:46.993882  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:49.430317  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:51.929870  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:54.428399  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:56.429149  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:34:58.930691  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:01.429365  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:03.429536  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:05.430051  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:07.430283  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:09.928338  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:11.929623  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:14.429426  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:16.930605  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:19.429972  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:21.929769  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:24.506373  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:26.930979  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:29.430112  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:31.930502  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:33.930540  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:36.429316  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:38.930040  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:41.429463  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:43.929831  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:46.429731  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:48.928990  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:51.429155  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:53.430127  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:55.929773  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:35:57.930174  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:00.428442  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:02.429826  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:04.928759  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:06.929595  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:09.429133  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:11.429262  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:13.928999  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:16.428745  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:18.429646  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:20.929121  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:22.929845  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:25.429556  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:27.929275  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:30.428747  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:32.429310  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:34.929146  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:36.929250  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:39.429392  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:41.929282  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:43.929685  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:45.930093  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:48.429707  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:50.929414  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:52.929481  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:55.429887  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:36:57.929776  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:00.429184  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:02.429609  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:04.929552  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:07.429439  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:09.429794  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:11.929204  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:13.929687  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:16.429904  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:18.929514  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:20.929588  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:23.428929  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:25.429987  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:27.929456  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:30.431101  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:32.929139  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:35.428989  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:37.429662  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:39.930913  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:42.430024  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:44.929221  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:47.430143  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:49.928325  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:51.930247  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:54.430031  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:56.929328  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:37:58.929573  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:01.429312  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:03.929176  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:06.428817  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:08.429021  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:10.429460  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:12.929970  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:15.429339  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:17.429516  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:19.429606  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:21.929384  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:23.931044  255529 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:25.432490  255529 pod_ready.go:81] duration metric: took 4m0.015280081s waiting for pod "calico-kube-controllers-c44b4545-tdhx9" in "kube-system" namespace to be "Ready" ...
	E0629 18:38:25.432511  255529 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0629 18:38:25.432519  255529 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-hx9h4" in "kube-system" namespace to be "Ready" ...
	I0629 18:38:27.443391  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:29.443743  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:31.942911  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:33.943669  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:36.443331  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:38.942161  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:40.942411  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:42.944104  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:44.944238  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:47.443146  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:49.445474  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:51.942688  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:54.443313  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:56.943760  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:38:59.442293  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:01.943483  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:04.443456  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:06.443511  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:08.943146  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:11.443164  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:13.942840  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:16.443650  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:18.943388  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:20.943463  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:23.443887  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:25.942674  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:27.943757  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:30.444375  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:32.943617  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:35.445033  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:37.942918  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:40.444679  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:42.942765  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:44.943359  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:46.943440  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:49.443175  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:51.942471  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:54.443707  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:56.942699  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:39:59.443915  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:01.943036  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:03.943126  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:06.443700  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:08.942400  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:11.443402  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:13.942555  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:16.443496  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:18.443785  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:20.942683  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:23.442984  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:25.443516  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:27.443667  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:29.942375  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:31.942804  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:34.443820  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:36.943364  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:39.444144  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:41.943605  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:44.443414  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:46.942837  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:48.943711  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:50.944278  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:53.443959  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:55.943076  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:40:57.943145  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:00.443908  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:02.942588  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:04.943518  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:07.443503  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:09.941961  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:11.943913  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:14.444107  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:16.942699  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:19.443647  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:21.942781  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:24.442483  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:26.443419  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:28.942372  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:30.942525  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:32.943517  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:35.442818  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:37.443208  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:39.444139  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:41.943410  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:44.442723  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:46.444561  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:48.942397  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:50.943977  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:53.442816  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:55.443840  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:41:57.943006  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:00.441991  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:02.443383  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:04.943055  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:06.946337  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:09.444013  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:11.942728  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:13.943419  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:16.444016  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:18.445620  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:20.942967  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:22.943788  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:25.442831  255529 pod_ready.go:102] pod "calico-node-hx9h4" in "kube-system" namespace has status "Ready":"False"
	I0629 18:42:25.447620  255529 pod_ready.go:81] duration metric: took 4m0.015091479s waiting for pod "calico-node-hx9h4" in "kube-system" namespace to be "Ready" ...
	E0629 18:42:25.447639  255529 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0629 18:42:25.447652  255529 pod_ready.go:38] duration metric: took 8m0.040336438s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 18:42:25.449848  255529 out.go:177] 
	W0629 18:42:25.451289  255529 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0629 18:42:25.451309  255529 out.go:239] * 
	W0629 18:42:25.452063  255529 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 18:42:25.453543  255529 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (517.58s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (365.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:36:13.869559   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125084912s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:36:23.405664   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.116878831s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.121354682s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0629 18:36:54.830590   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.120701773s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:37:23.145837   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128479016s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:37:37.401047   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:37:37.406297   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:37:37.416554   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:37:37.436950   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:37:37.477335   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:37:37.558125   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:37:37.719164   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:37:38.039762   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:37:38.680040   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:37:39.961249   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:37:42.521970   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:37:45.326240   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:37:47.642637   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.124923996s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0629 18:37:57.883036   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.116606226s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0629 18:38:15.454624   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:38:16.750815   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:38:18.363835   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:38:28.816700   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:38:28.821906   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:38:28.832135   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:38:28.852379   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:38:28.892625   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:38:28.972955   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:38:29.133334   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:38:29.454017   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:38:30.095103   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:38:31.376053   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:38:33.936249   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:38:39.056440   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125653838s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0629 18:38:49.296717   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:38:59.323999   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:39:09.776977   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:39:20.185815   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:20.191071   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:20.201285   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:20.221530   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:20.261804   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:20.342114   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:20.502488   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:20.823076   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:21.464242   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:22.744581   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.120928567s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0629 18:39:25.304957   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:30.426052   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.111810351s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.121514414s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127758776s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (365.47s)

TestNetworkPlugins/group/enable-default-cni/DNS (347.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:39:50.737425   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
E0629 18:39:58.926723   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:58.931994   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:58.942253   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:58.962501   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:59.002811   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:59.083150   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:59.243458   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:39:59.564256   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:40:00.205188   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:40:01.147924   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:40:01.482803   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:40:01.485902   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133709254s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:40:04.046469   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:40:09.167038   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:40:12.198542   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122926321s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0629 18:40:19.407605   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:40:21.244181   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:40:26.195177   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:40:29.166748   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:40:32.908035   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.114203025s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0629 18:40:37.754402   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:40:39.887778   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:40:42.109012   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127894603s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:41:00.591295   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.119599358s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0629 18:41:12.658400   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:41:20.848975   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.120352196s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123573585s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123602764s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0629 18:42:23.145275   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:42:37.401294   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
E0629 18:42:42.769998   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.117093774s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0629 18:43:05.085112   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629182651-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.107543242s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0629 18:43:28.816987   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:43:56.499183   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629182252-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.116896032s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0629 18:44:20.185863   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:44:47.869870   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
E0629 18:44:58.926180   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:45:01.483353   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:45:12.198629   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
E0629 18:45:26.610630   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629182253-10091/client.crt: no such file or directory
E0629 18:45:32.907665   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126913307s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (347.25s)
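Both DNS failures above share one failure mode: the assertion at net_test.go:180 only checks that the `nslookup kubernetes.default` output contains the Service ClusterIP `10.96.0.1`, and every retry instead returned a resolver timeout. A minimal sketch of that substring check, run here against the canned output captured in the log (the `out` variable is illustrative; no live cluster is required):

```shell
# Stand-in for the nslookup output the test captured from the netcat pod.
# On a healthy cluster this would contain a line like "Address: 10.96.0.1".
out=';; connection timed out; no servers could be reached'

# The test passes only when the ClusterIP appears in the output.
if printf '%s\n' "$out" | grep -q '10.96.0.1'; then
  echo "PASS: kubernetes.default resolved"
else
  echo "FAIL: no servers could be reached"
fi
```

With the logged output the check takes the FAIL branch, matching the `got=";; connection timed out..."` / `want=*"10.96.0.1"*` message emitted by the test.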


Test pass (247/275)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 15.1
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.24.2/json-events 4.92
11 TestDownloadOnly/v1.24.2/preload-exists 0
15 TestDownloadOnly/v1.24.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.31
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.2
18 TestDownloadOnlyKic 2.71
19 TestBinaryMirror 0.87
20 TestOffline 74.51
22 TestAddons/Setup 135.92
24 TestAddons/parallel/Registry 17.57
25 TestAddons/parallel/Ingress 20.2
26 TestAddons/parallel/MetricsServer 5.45
27 TestAddons/parallel/HelmTiller 17.24
29 TestAddons/parallel/CSI 39.74
30 TestAddons/parallel/Headlamp 8.92
32 TestAddons/serial/GCPAuth 35.55
33 TestAddons/StoppedEnableDisable 20.28
34 TestCertOptions 43.48
35 TestCertExpiration 233.57
37 TestForceSystemdFlag 36.3
38 TestForceSystemdEnv 33.49
39 TestKVMDriverInstallOrUpdate 3.67
43 TestErrorSpam/setup 23.42
44 TestErrorSpam/start 0.94
45 TestErrorSpam/status 1.11
46 TestErrorSpam/pause 1.55
47 TestErrorSpam/unpause 1.57
48 TestErrorSpam/stop 20.31
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 55.5
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 15.46
55 TestFunctional/serial/KubeContext 0.04
56 TestFunctional/serial/KubectlGetPods 0.17
59 TestFunctional/serial/CacheCmd/cache/add_remote 3.07
60 TestFunctional/serial/CacheCmd/cache/add_local 1.91
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
62 TestFunctional/serial/CacheCmd/cache/list 0.06
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
64 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
65 TestFunctional/serial/CacheCmd/cache/delete 0.13
66 TestFunctional/serial/MinikubeKubectlCmd 0.11
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
68 TestFunctional/serial/ExtraConfig 36.79
69 TestFunctional/serial/ComponentHealth 0.06
70 TestFunctional/serial/LogsCmd 1.06
73 TestFunctional/parallel/ConfigCmd 0.47
74 TestFunctional/parallel/DashboardCmd 28.66
75 TestFunctional/parallel/DryRun 0.56
76 TestFunctional/parallel/InternationalLanguage 0.24
77 TestFunctional/parallel/StatusCmd 1.26
80 TestFunctional/parallel/ServiceCmd 8.48
81 TestFunctional/parallel/ServiceCmdConnect 10.58
82 TestFunctional/parallel/AddonsCmd 0.17
83 TestFunctional/parallel/PersistentVolumeClaim 25.86
85 TestFunctional/parallel/SSHCmd 0.86
86 TestFunctional/parallel/CpCmd 1.42
87 TestFunctional/parallel/MySQL 25.36
88 TestFunctional/parallel/FileSync 0.35
89 TestFunctional/parallel/CertSync 2.36
93 TestFunctional/parallel/NodeLabels 0.07
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.88
97 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
98 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
99 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
100 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
101 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.43
102 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
103 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.47
104 TestFunctional/parallel/Version/short 0.09
105 TestFunctional/parallel/ImageCommands/ImageBuild 4.63
106 TestFunctional/parallel/Version/components 1.07
107 TestFunctional/parallel/ImageCommands/Setup 1.09
108 TestFunctional/parallel/ProfileCmd/profile_not_create 0.62
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.3
113 TestFunctional/parallel/ProfileCmd/profile_list 0.7
114 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.02
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.17
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.48
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.86
125 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
126 TestFunctional/parallel/MountCmd/any-port 8.89
127 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.08
128 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.33
129 TestFunctional/parallel/MountCmd/specific-port 2.34
130 TestFunctional/delete_addon-resizer_images 0.1
131 TestFunctional/delete_my-image_image 0.03
132 TestFunctional/delete_minikube_cached_images 0.03
135 TestIngressAddonLegacy/StartLegacyK8sCluster 67.91
137 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.62
138 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.42
139 TestIngressAddonLegacy/serial/ValidateIngressAddons 40.38
142 TestJSONOutput/start/Command 56.93
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 0.67
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 0.61
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 20.16
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 0.29
167 TestKicCustomNetwork/create_custom_network 35.7
168 TestKicCustomNetwork/use_default_bridge_network 28.62
169 TestKicExistingNetwork 29.45
170 TestKicCustomSubnet 28.86
171 TestMainNoArgs 0.06
172 TestMinikubeProfile 52.5
175 TestMountStart/serial/StartWithMountFirst 4.91
176 TestMountStart/serial/VerifyMountFirst 0.34
177 TestMountStart/serial/StartWithMountSecond 4.92
178 TestMountStart/serial/VerifyMountSecond 0.34
179 TestMountStart/serial/DeleteFirst 1.8
180 TestMountStart/serial/VerifyMountPostDelete 0.35
181 TestMountStart/serial/Stop 1.28
182 TestMountStart/serial/RestartStopped 6.65
183 TestMountStart/serial/VerifyMountPostStop 0.34
186 TestMultiNode/serial/FreshStart2Nodes 81.17
187 TestMultiNode/serial/DeployApp2Nodes 4.1
188 TestMultiNode/serial/PingHostFrom2Pods 0.86
189 TestMultiNode/serial/AddNode 34.54
190 TestMultiNode/serial/ProfileList 0.37
191 TestMultiNode/serial/CopyFile 12.06
192 TestMultiNode/serial/StopNode 2.45
193 TestMultiNode/serial/StartAfterStop 31.39
194 TestMultiNode/serial/RestartKeepsNodes 157.64
195 TestMultiNode/serial/DeleteNode 5.06
196 TestMultiNode/serial/StopMultiNode 40.28
197 TestMultiNode/serial/RestartMultiNode 80.64
198 TestMultiNode/serial/ValidateNameConflict 26.41
203 TestPreload 114.43
205 TestScheduledStopUnix 101.27
208 TestInsufficientStorage 15.97
209 TestRunningBinaryUpgrade 105.5
212 TestMissingContainerUpgrade 158.3
214 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
215 TestStoppedBinaryUpgrade/Setup 0.56
216 TestNoKubernetes/serial/StartWithK8s 60.67
217 TestStoppedBinaryUpgrade/Upgrade 120.87
218 TestNoKubernetes/serial/StartWithStopK8s 22.79
219 TestNoKubernetes/serial/Start 9.74
220 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
221 TestNoKubernetes/serial/ProfileList 7.59
222 TestNoKubernetes/serial/Stop 1.37
223 TestNoKubernetes/serial/StartNoArgs 5.81
224 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
232 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
234 TestPause/serial/Start 54.24
235 TestPause/serial/SecondStartNoReconfiguration 15.78
243 TestNetworkPlugins/group/false 0.51
247 TestPause/serial/Pause 0.74
248 TestPause/serial/VerifyStatus 0.38
249 TestPause/serial/Unpause 0.66
250 TestPause/serial/PauseAgain 0.84
251 TestPause/serial/DeletePaused 5.98
252 TestPause/serial/VerifyDeletedResources 5.73
254 TestStartStop/group/old-k8s-version/serial/FirstStart 106.06
256 TestStartStop/group/no-preload/serial/FirstStart 60.61
257 TestStartStop/group/no-preload/serial/DeployApp 9.3
258 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.55
259 TestStartStop/group/no-preload/serial/Stop 20.14
260 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
261 TestStartStop/group/no-preload/serial/SecondStart 323.8
262 TestStartStop/group/old-k8s-version/serial/DeployApp 7.31
263 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.55
264 TestStartStop/group/old-k8s-version/serial/Stop 20.12
265 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
266 TestStartStop/group/old-k8s-version/serial/SecondStart 419.69
268 TestStartStop/group/default-k8s-different-port/serial/FirstStart 45.98
269 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.39
270 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.66
271 TestStartStop/group/default-k8s-different-port/serial/Stop 20.1
272 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.21
273 TestStartStop/group/default-k8s-different-port/serial/SecondStart 320.93
275 TestStartStop/group/newest-cni/serial/FirstStart 47.53
276 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
277 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
278 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.38
279 TestStartStop/group/no-preload/serial/Pause 3.29
280 TestStartStop/group/newest-cni/serial/DeployApp 0
281 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.64
282 TestStartStop/group/newest-cni/serial/Stop 20.15
284 TestStartStop/group/embed-certs/serial/FirstStart 57.87
285 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
286 TestStartStop/group/newest-cni/serial/SecondStart 30.56
287 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
288 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
289 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
290 TestStartStop/group/newest-cni/serial/Pause 3.34
291 TestNetworkPlugins/group/auto/Start 80.14
292 TestStartStop/group/embed-certs/serial/DeployApp 8.3
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.65
294 TestStartStop/group/embed-certs/serial/Stop 20.17
295 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
296 TestStartStop/group/embed-certs/serial/SecondStart 557.08
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.47
300 TestStartStop/group/old-k8s-version/serial/Pause 3.18
301 TestNetworkPlugins/group/kindnet/Start 62.99
302 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.01
303 TestNetworkPlugins/group/auto/KubeletFlags 0.39
304 TestNetworkPlugins/group/auto/NetCatPod 8.33
305 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.07
306 TestNetworkPlugins/group/auto/DNS 0.13
307 TestNetworkPlugins/group/auto/Localhost 0.12
308 TestNetworkPlugins/group/auto/HairPin 0.13
309 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.43
310 TestStartStop/group/default-k8s-different-port/serial/Pause 3.46
311 TestNetworkPlugins/group/cilium/Start 79.02
313 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
314 TestNetworkPlugins/group/kindnet/KubeletFlags 0.53
315 TestNetworkPlugins/group/kindnet/NetCatPod 9.28
316 TestNetworkPlugins/group/kindnet/DNS 0.14
317 TestNetworkPlugins/group/kindnet/Localhost 0.13
318 TestNetworkPlugins/group/kindnet/HairPin 0.13
319 TestNetworkPlugins/group/enable-default-cni/Start 300.57
320 TestNetworkPlugins/group/cilium/ControllerPod 5.02
321 TestNetworkPlugins/group/cilium/KubeletFlags 0.37
322 TestNetworkPlugins/group/cilium/NetCatPod 9.84
323 TestNetworkPlugins/group/cilium/DNS 0.13
324 TestNetworkPlugins/group/cilium/Localhost 0.13
325 TestNetworkPlugins/group/cilium/HairPin 0.11
326 TestNetworkPlugins/group/bridge/Start 36.86
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
328 TestNetworkPlugins/group/bridge/NetCatPod 11.23
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.18
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.4
336 TestStartStop/group/embed-certs/serial/Pause 3.1
TestDownloadOnly/v1.16.0/json-events (15.1s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220629175257-10091 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220629175257-10091 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (15.0967243s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (15.10s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220629175257-10091
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220629175257-10091: exit status 85 (77.328586ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|----------|---------|---------|---------------------|----------|
	| Command |                Args                | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | minikube | jenkins | v1.26.0 | 29 Jun 22 17:52 UTC |          |
	|         | download-only-20220629175257-10091 |          |         |         |                     |          |
	|         | --force --alsologtostderr          |          |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |          |         |         |                     |          |
	|         | --container-runtime=containerd     |          |         |         |                     |          |
	|         | --driver=docker                    |          |         |         |                     |          |
	|         | --container-runtime=containerd     |          |         |         |                     |          |
	|---------|------------------------------------|----------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 17:52:57
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 17:52:57.414068   10104 out.go:296] Setting OutFile to fd 1 ...
	I0629 17:52:57.414158   10104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 17:52:57.414167   10104 out.go:309] Setting ErrFile to fd 2...
	I0629 17:52:57.414171   10104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 17:52:57.414586   10104 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	W0629 17:52:57.414704   10104 root.go:307] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/config/config.json: no such file or directory
	I0629 17:52:57.414934   10104 out.go:303] Setting JSON to true
	I0629 17:52:57.415669   10104 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2128,"bootTime":1656523050,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1033-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0629 17:52:57.415724   10104 start.go:125] virtualization: kvm guest
	I0629 17:52:57.418282   10104 out.go:97] [download-only-20220629175257-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	W0629 17:52:57.418386   10104 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball: no such file or directory
	I0629 17:52:57.420004   10104 out.go:169] MINIKUBE_LOCATION=14420
	I0629 17:52:57.418428   10104 notify.go:193] Checking for updates...
	I0629 17:52:57.422975   10104 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 17:52:57.424671   10104 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 17:52:57.426255   10104 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 17:52:57.427860   10104 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0629 17:52:57.430643   10104 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0629 17:52:57.430862   10104 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 17:52:57.466418   10104 docker.go:137] docker version: linux-20.10.17
	I0629 17:52:57.466486   10104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 17:52:58.181369   10104 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:34 SystemTime:2022-06-29 17:52:57.49151401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1033-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 17:52:58.181477   10104 docker.go:254] overlay module found
	I0629 17:52:58.183331   10104 out.go:97] Using the docker driver based on user configuration
	I0629 17:52:58.183351   10104 start.go:284] selected driver: docker
	I0629 17:52:58.183356   10104 start.go:808] validating driver "docker" against <nil>
	I0629 17:52:58.183428   10104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 17:52:58.286229   10104 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:34 SystemTime:2022-06-29 17:52:58.208591804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1033-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 17:52:58.286333   10104 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 17:52:58.286757   10104 start_flags.go:377] Using suggested 8000MB memory alloc based on sys=32103MB, container=32103MB
	I0629 17:52:58.286878   10104 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0629 17:52:58.288757   10104 out.go:169] Using Docker driver with root privileges
	I0629 17:52:58.289954   10104 cni.go:95] Creating CNI manager for ""
	I0629 17:52:58.289976   10104 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0629 17:52:58.289989   10104 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0629 17:52:58.289995   10104 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0629 17:52:58.290000   10104 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
	I0629 17:52:58.290020   10104 start_flags.go:310] config:
	{Name:download-only-20220629175257-10091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220629175257-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 17:52:58.291289   10104 out.go:97] Starting control plane node download-only-20220629175257-10091 in cluster download-only-20220629175257-10091
	I0629 17:52:58.291307   10104 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0629 17:52:58.292507   10104 out.go:97] Pulling base image ...
	I0629 17:52:58.292536   10104 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0629 17:52:58.292626   10104 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 17:52:58.319719   10104 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e to local cache
	I0629 17:52:58.320000   10104 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local cache directory
	I0629 17:52:58.320099   10104 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e to local cache
	I0629 17:52:58.345831   10104 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0629 17:52:58.345851   10104 cache.go:57] Caching tarball of preloaded images
	I0629 17:52:58.345984   10104 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0629 17:52:58.347715   10104 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0629 17:52:58.347733   10104 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0629 17:52:58.417564   10104 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0629 17:53:00.663038   10104 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0629 17:53:00.663111   10104 preload.go:256] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0629 17:53:01.514706   10104 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0629 17:53:01.515032   10104 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/download-only-20220629175257-10091/config.json ...
	I0629 17:53:01.515079   10104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/download-only-20220629175257-10091/config.json: {Name:mkf33b2d1051cf542c4c69f7907cefe7fee5d82c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 17:53:01.515259   10104 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0629 17:53:01.515461   10104 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220629175257-10091"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

TestDownloadOnly/v1.24.2/json-events (4.92s)

=== RUN   TestDownloadOnly/v1.24.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220629175257-10091 --force --alsologtostderr --kubernetes-version=v1.24.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220629175257-10091 --force --alsologtostderr --kubernetes-version=v1.24.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.919791012s)
--- PASS: TestDownloadOnly/v1.24.2/json-events (4.92s)

TestDownloadOnly/v1.24.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.24.2/preload-exists
--- PASS: TestDownloadOnly/v1.24.2/preload-exists (0.00s)

TestDownloadOnly/v1.24.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.24.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220629175257-10091
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220629175257-10091: exit status 85 (78.00565ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|----------|---------|---------|---------------------|----------|
	| Command |                Args                | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | minikube | jenkins | v1.26.0 | 29 Jun 22 17:52 UTC |          |
	|         | download-only-20220629175257-10091 |          |         |         |                     |          |
	|         | --force --alsologtostderr          |          |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |          |         |         |                     |          |
	|         | --container-runtime=containerd     |          |         |         |                     |          |
	|         | --driver=docker                    |          |         |         |                     |          |
	|         | --container-runtime=containerd     |          |         |         |                     |          |
	| start   | -o=json --download-only -p         | minikube | jenkins | v1.26.0 | 29 Jun 22 17:53 UTC |          |
	|         | download-only-20220629175257-10091 |          |         |         |                     |          |
	|         | --force --alsologtostderr          |          |         |         |                     |          |
	|         | --kubernetes-version=v1.24.2       |          |         |         |                     |          |
	|         | --container-runtime=containerd     |          |         |         |                     |          |
	|         | --driver=docker                    |          |         |         |                     |          |
	|         | --container-runtime=containerd     |          |         |         |                     |          |
	|---------|------------------------------------|----------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 17:53:12
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 17:53:12.588866   10269 out.go:296] Setting OutFile to fd 1 ...
	I0629 17:53:12.588983   10269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 17:53:12.588993   10269 out.go:309] Setting ErrFile to fd 2...
	I0629 17:53:12.588997   10269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 17:53:12.589401   10269 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	W0629 17:53:12.589540   10269 root.go:307] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/config/config.json: no such file or directory
	I0629 17:53:12.589665   10269 out.go:303] Setting JSON to true
	I0629 17:53:12.590543   10269 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2143,"bootTime":1656523050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1033-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0629 17:53:12.590600   10269 start.go:125] virtualization: kvm guest
	I0629 17:53:12.592801   10269 out.go:97] [download-only-20220629175257-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0629 17:53:12.592918   10269 notify.go:193] Checking for updates...
	I0629 17:53:12.594378   10269 out.go:169] MINIKUBE_LOCATION=14420
	I0629 17:53:12.595802   10269 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 17:53:12.597238   10269 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 17:53:12.598516   10269 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 17:53:12.599844   10269 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220629175257-10091"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.2/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.31s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.31s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220629175257-10091
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnlyKic (2.71s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220629175318-10091 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220629175318-10091 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (1.656804552s)
helpers_test.go:175: Cleaning up "download-docker-20220629175318-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220629175318-10091
--- PASS: TestDownloadOnlyKic (2.71s)

TestBinaryMirror (0.87s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220629175320-10091 --alsologtostderr --binary-mirror http://127.0.0.1:39303 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-20220629175320-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220629175320-10091
--- PASS: TestBinaryMirror (0.87s)

TestOffline (74.51s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20220629181940-10091 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20220629181940-10091 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m12.184629415s)
helpers_test.go:175: Cleaning up "offline-containerd-20220629181940-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20220629181940-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220629181940-10091: (2.326579151s)
--- PASS: TestOffline (74.51s)

TestAddons/Setup (135.92s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220629175321-10091 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220629175321-10091 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m15.918955722s)
--- PASS: TestAddons/Setup (135.92s)

TestAddons/parallel/Registry (17.57s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 9.583348ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-9txt7" [9136999f-ee86-43ea-81b6-4bf9a02c8dd8] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007432652s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-proxy-g7gbv" [7d5cc498-7132-450f-ad12-aee0bdb44507] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009936603s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220629175321-10091 delete po -l run=registry-test --now
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220629175321-10091 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-20220629175321-10091 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.776728799s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220629175321-10091 ip

=== CONT  TestAddons/parallel/Registry
addons_test.go:340: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220629175321-10091 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.57s)

TestAddons/parallel/Ingress (20.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220629175321-10091 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220629175321-10091 replace --force -f testdata/nginx-ingress-v1.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220629175321-10091 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [d0decbe9-46a0-4ced-a8c9-0fa6138bbf2a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [d0decbe9-46a0-4ced-a8c9-0fa6138bbf2a] Running
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006469152s
addons_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220629175321-10091 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Run:  kubectl --context addons-20220629175321-10091 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220629175321-10091 ip
addons_test.go:249: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220629175321-10091 addons disable ingress-dns --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220629175321-10091 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p addons-20220629175321-10091 addons disable ingress --alsologtostderr -v=1: (7.545811428s)
--- PASS: TestAddons/parallel/Ingress (20.20s)

TestAddons/parallel/MetricsServer (5.45s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 2.011274ms
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-8595bd7d4c-7jzkd" [bbdaf4fd-2c97-41cb-9652-1ce480293f82] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007615519s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220629175321-10091 top pods -n kube-system
addons_test.go:384: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220629175321-10091 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.45s)

TestAddons/parallel/HelmTiller (17.24s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 9.538784ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-c7d76457b-9sf6g" [7094b0a7-7e63-4491-ae66-afc7b958fc9d] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008288871s
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220629175321-10091 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Done: kubectl --context addons-20220629175321-10091 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (11.814939618s)
addons_test.go:430: kubectl --context addons-20220629175321-10091 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:442: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220629175321-10091 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (17.24s)

TestAddons/parallel/CSI (39.74s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 11.702004ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220629175321-10091 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220629175321-10091 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220629175321-10091 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220629175321-10091 create -f testdata/csi-hostpath-driver/pv-pod.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [9073874f-ca8b-4b1f-bda3-703ed8e44c2a] Pending
helpers_test.go:342: "task-pv-pod" [9073874f-ca8b-4b1f-bda3-703ed8e44c2a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [9073874f-ca8b-4b1f-bda3-703ed8e44c2a] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005615381s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220629175321-10091 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220629175321-10091 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220629175321-10091 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220629175321-10091 delete pod task-pv-pod
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220629175321-10091 delete pvc hpvc

=== CONT  TestAddons/parallel/CSI
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220629175321-10091 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220629175321-10091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2022/06/29 17:55:54 [DEBUG] GET http://192.168.49.2:5000

=== CONT  TestAddons/parallel/CSI
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220629175321-10091 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [4f496c46-97f4-444b-865d-a68e70c46e54] Pending
helpers_test.go:342: "task-pv-pod-restore" [4f496c46-97f4-444b-865d-a68e70c46e54] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [4f496c46-97f4-444b-865d-a68e70c46e54] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 14.007027525s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220629175321-10091 delete pod task-pv-pod-restore
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220629175321-10091 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220629175321-10091 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220629175321-10091 addons disable csi-hostpath-driver --alsologtostderr -v=1

=== CONT  TestAddons/parallel/CSI
addons_test.go:590: (dbg) Done: out/minikube-linux-amd64 -p addons-20220629175321-10091 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.972380212s)
addons_test.go:594: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220629175321-10091 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.74s)

TestAddons/parallel/Headlamp (8.92s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-20220629175321-10091 --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-866f5bd7bc-nm8lg" [592eddd6-a918-412b-bde0-de09bbaa1450] Pending

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-866f5bd7bc-nm8lg" [592eddd6-a918-412b-bde0-de09bbaa1450] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-866f5bd7bc-nm8lg" [592eddd6-a918-412b-bde0-de09bbaa1450] Running

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 8.063628933s
--- PASS: TestAddons/parallel/Headlamp (8.92s)

TestAddons/serial/GCPAuth (35.55s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220629175321-10091 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220629175321-10091 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [96aeae92-2d95-44e2-8cae-ea5b4a658070] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [96aeae92-2d95-44e2-8cae-ea5b4a658070] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.005661879s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220629175321-10091 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220629175321-10091 describe sa gcp-auth-test
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220629175321-10091 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220629175321-10091 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-linux-amd64 -p addons-20220629175321-10091 addons disable gcp-auth --alsologtostderr -v=1: (6.152533783s)
addons_test.go:703: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220629175321-10091 addons enable gcp-auth
addons_test.go:703: (dbg) Done: out/minikube-linux-amd64 -p addons-20220629175321-10091 addons enable gcp-auth: (2.150770944s)
addons_test.go:709: (dbg) Run:  kubectl --context addons-20220629175321-10091 apply -f testdata/private-image.yaml
addons_test.go:716: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7c74db7cd9-x864q" [91079c30-3a62-485b-8214-9df6bc4bc129] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7c74db7cd9-x864q" [91079c30-3a62-485b-8214-9df6bc4bc129] Running
addons_test.go:716: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 10.005675558s
addons_test.go:722: (dbg) Run:  kubectl --context addons-20220629175321-10091 apply -f testdata/private-image-eu.yaml
addons_test.go:727: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-545d57c67f-rf47x" [5b4e9ef3-b1dd-492e-a210-56adb55951ff] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-545d57c67f-rf47x" [5b4e9ef3-b1dd-492e-a210-56adb55951ff] Running
addons_test.go:727: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 8.007248199s
--- PASS: TestAddons/serial/GCPAuth (35.55s)

TestAddons/StoppedEnableDisable (20.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220629175321-10091
addons_test.go:134: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220629175321-10091: (20.089178155s)
addons_test.go:138: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220629175321-10091
addons_test.go:142: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220629175321-10091
--- PASS: TestAddons/StoppedEnableDisable (20.28s)

TestCertOptions (43.48s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220629182317-10091 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220629182317-10091 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (40.309372843s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220629182317-10091 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220629182317-10091 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220629182317-10091 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220629182317-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220629182317-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220629182317-10091: (2.345757119s)
--- PASS: TestCertOptions (43.48s)

TestCertExpiration (233.57s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220629182257-10091 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220629182257-10091 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.57656714s)

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220629182257-10091 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220629182257-10091 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (14.690093868s)
helpers_test.go:175: Cleaning up "cert-expiration-20220629182257-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220629182257-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220629182257-10091: (2.306575461s)
--- PASS: TestCertExpiration (233.57s)

TestForceSystemdFlag (36.3s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220629182310-10091 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220629182310-10091 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.386727092s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220629182310-10091 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220629182310-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220629182310-10091
E0629 18:23:46.194081   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220629182310-10091: (2.52237079s)
--- PASS: TestForceSystemdFlag (36.30s)

TestForceSystemdEnv (33.49s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220629182219-10091 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0629 18:22:23.145816   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220629182219-10091 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (28.651284197s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220629182219-10091 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-20220629182219-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220629182219-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220629182219-10091: (4.415280506s)
--- PASS: TestForceSystemdEnv (33.49s)

TestKVMDriverInstallOrUpdate (3.67s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.67s)

TestErrorSpam/setup (23.42s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220629175719-10091 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220629175719-10091 --driver=docker  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220629175719-10091 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220629175719-10091 --driver=docker  --container-runtime=containerd: (23.424496873s)
--- PASS: TestErrorSpam/setup (23.42s)

TestErrorSpam/start (0.94s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 start --dry-run
--- PASS: TestErrorSpam/start (0.94s)

TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (1.55s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 pause
--- PASS: TestErrorSpam/pause (1.55s)

TestErrorSpam/unpause (1.57s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

TestErrorSpam/stop (20.31s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 stop: (20.051011439s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220629175719-10091 --log_dir /tmp/nospam-20220629175719-10091 stop
--- PASS: TestErrorSpam/stop (20.31s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/test/nested/copy/10091/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (55.5s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220629175813-10091 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2160: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220629175813-10091 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (55.496480335s)
--- PASS: TestFunctional/serial/StartWithProxy (55.50s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.46s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220629175813-10091 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220629175813-10091 --alsologtostderr -v=8: (15.462264004s)
functional_test.go:655: soft start took 15.462913756s for "functional-20220629175813-10091" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.46s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.17s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220629175813-10091 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.17s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220629175813-10091 cache add k8s.gcr.io/pause:3.1: (1.146877002s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220629175813-10091 cache add k8s.gcr.io/pause:3.3: (1.096844273s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)

TestFunctional/serial/CacheCmd/cache/add_local (1.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220629175813-10091 /tmp/TestFunctionalserialCacheCmdcacheadd_local43843303/001
functional_test.go:1081: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 cache add minikube-local-cache-test:functional-20220629175813-10091
functional_test.go:1081: (dbg) Done: out/minikube-linux-amd64 -p functional-20220629175813-10091 cache add minikube-local-cache-test:functional-20220629175813-10091: (1.649393197s)
functional_test.go:1086: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 cache delete minikube-local-cache-test:functional-20220629175813-10091
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220629175813-10091
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.91s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (345.197001ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-linux-amd64 -p functional-20220629175813-10091 cache reload: (1.007714082s)
functional_test.go:1155: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 kubectl -- --context functional-20220629175813-10091 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220629175813-10091 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (36.79s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220629175813-10091 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220629175813-10091 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.789885619s)
functional_test.go:753: restart took 36.790006551s for "functional-20220629175813-10091" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.79s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220629175813-10091 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.06s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 logs
functional_test.go:1228: (dbg) Done: out/minikube-linux-amd64 -p functional-20220629175813-10091 logs: (1.057291266s)
--- PASS: TestFunctional/serial/LogsCmd (1.06s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220629175813-10091 config get cpus: exit status 14 (76.129862ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220629175813-10091 config get cpus: exit status 14 (79.772847ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (28.66s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220629175813-10091 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220629175813-10091 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 45662: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.66s)

TestFunctional/parallel/DryRun (0.56s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220629175813-10091 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:966: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220629175813-10091 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (245.669007ms)

-- stdout --
	* [functional-20220629175813-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0629 18:00:32.690962   45176 out.go:296] Setting OutFile to fd 1 ...
	I0629 18:00:32.691096   45176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:00:32.691107   45176 out.go:309] Setting ErrFile to fd 2...
	I0629 18:00:32.691114   45176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:00:32.691517   45176 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 18:00:32.691762   45176 out.go:303] Setting JSON to false
	I0629 18:00:32.692914   45176 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2583,"bootTime":1656523050,"procs":545,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1033-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0629 18:00:32.692976   45176 start.go:125] virtualization: kvm guest
	I0629 18:00:32.695493   45176 out.go:177] * [functional-20220629175813-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0629 18:00:32.696790   45176 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 18:00:32.698095   45176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 18:00:32.699378   45176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 18:00:32.700638   45176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 18:00:32.702002   45176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0629 18:00:32.703565   45176 config.go:178] Loaded profile config "functional-20220629175813-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:00:32.703931   45176 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 18:00:32.745109   45176 docker.go:137] docker version: linux-20.10.17
	I0629 18:00:32.745215   45176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:00:32.867192   45176 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:40 SystemTime:2022-06-29 18:00:32.794127888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1033-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:00:32.867297   45176 docker.go:254] overlay module found
	I0629 18:00:32.869467   45176 out.go:177] * Using the docker driver based on existing profile
	I0629 18:00:32.870795   45176 start.go:284] selected driver: docker
	I0629 18:00:32.870809   45176 start.go:808] validating driver "docker" against &{Name:functional-20220629175813-10091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220629175813-10091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:00:32.870949   45176 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 18:00:32.873117   45176 out.go:177] 
	W0629 18:00:32.874294   45176 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0629 18:00:32.875378   45176 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220629175813-10091 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.56s)
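The dry-run exit above comes from minikube's minimum-memory guard: a 250MB request is rejected against the 1800MB usable minimum, with exit status 23. The sketch below is only an illustrative stand-in for that check (the function name and constant are ours; minikube's real validation lives in its Go code):

```shell
# Illustrative re-creation of the RSRC_INSUFFICIENT_REQ_MEMORY behavior seen
# above. Not minikube code; it only mirrors the observable message and status.
MIN_USABLE_MB=1800

validate_memory() {
  local requested_mb=$1
  if [ "$requested_mb" -lt "$MIN_USABLE_MB" ]; then
    # Same shape as the log line: requested MiB vs. usable minimum in MB.
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${requested_mb}MiB is less than the usable minimum of ${MIN_USABLE_MB}MB" >&2
    return 23   # matches the `exit status 23` the test asserts on
  fi
  return 0
}

validate_memory 250 || echo "rejected with status $?"
validate_memory 4000 && echo "accepted"
```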

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220629175813-10091 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220629175813-10091 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (235.015188ms)

-- stdout --
	* [functional-20220629175813-10091] minikube v1.26.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0629 18:00:27.109087   42481 out.go:296] Setting OutFile to fd 1 ...
	I0629 18:00:27.109231   42481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:00:27.109243   42481 out.go:309] Setting ErrFile to fd 2...
	I0629 18:00:27.109251   42481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:00:27.109706   42481 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 18:00:27.109959   42481 out.go:303] Setting JSON to false
	I0629 18:00:27.111025   42481 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2577,"bootTime":1656523050,"procs":541,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1033-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0629 18:00:27.111089   42481 start.go:125] virtualization: kvm guest
	I0629 18:00:27.113477   42481 out.go:177] * [functional-20220629175813-10091] minikube v1.26.0 sur Ubuntu 20.04 (kvm/amd64)
	I0629 18:00:27.114824   42481 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 18:00:27.116115   42481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 18:00:27.117497   42481 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 18:00:27.118825   42481 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 18:00:27.120212   42481 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0629 18:00:27.121834   42481 config.go:178] Loaded profile config "functional-20220629175813-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:00:27.122258   42481 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 18:00:27.157193   42481 docker.go:137] docker version: linux-20.10.17
	I0629 18:00:27.157281   42481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:00:27.267007   42481 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-06-29 18:00:27.185177823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1033-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:00:27.267136   42481 docker.go:254] overlay module found
	I0629 18:00:27.269918   42481 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0629 18:00:27.271164   42481 start.go:284] selected driver: docker
	I0629 18:00:27.271181   42481 start.go:808] validating driver "docker" against &{Name:functional-20220629175813-10091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220629175813-10091 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-s
ecurity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 18:00:27.271315   42481 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 18:00:27.273657   42481 out.go:177] 
	W0629 18:00:27.274994   42481 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0629 18:00:27.276225   42481 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)

TestFunctional/parallel/ServiceCmd (8.48s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220629175813-10091 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220629175813-10091 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-jz8kq" [0fa79040-c88a-49c4-b5de-fc7a8bb6b9e3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54c4b5c49f-jz8kq" [0fa79040-c88a-49c4-b5de-fc7a8bb6b9e3] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 6.010158394s
functional_test.go:1448: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1475: found endpoint: https://192.168.49.2:31488
functional_test.go:1490: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1510: found endpoint for hello-node: http://192.168.49.2:31488
--- PASS: TestFunctional/parallel/ServiceCmd (8.48s)
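Both endpoints the test found (`https://192.168.49.2:31488` and `http://192.168.49.2:31488`) are simply `<scheme>://<node-ip>:<node-port>` for the exposed NodePort service. The real `kubectl`/`minikube service` commands need a live cluster, so this sketch only reproduces the endpoint construction (the helper is ours, not part of minikube):

```shell
# How the endpoints reported above are formed: a NodePort service is reached
# on the node IP (192.168.49.2 for this docker-driver cluster) at the port
# Kubernetes allocated for the service.
service_url() {
  local scheme=$1 node_ip=$2 node_port=$3
  printf '%s://%s:%s\n' "$scheme" "$node_ip" "$node_port"
}

service_url https 192.168.49.2 31488   # cf. `service --https --url hello-node`
service_url http  192.168.49.2 31488   # cf. `service hello-node --url`
```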

TestFunctional/parallel/ServiceCmdConnect (10.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220629175813-10091 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220629175813-10091 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-578cdc45cb-cwnwd" [d1ece4da-a94f-43e7-ac68-7d7dd9b00eab] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-578cdc45cb-cwnwd" [d1ece4da-a94f-43e7-ac68-7d7dd9b00eab] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.006174495s
functional_test.go:1578: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1584: found endpoint for hello-node-connect: http://192.168.49.2:31410
functional_test.go:1604: http://192.168.49.2:31410: success! body:

Hostname: hello-node-connect-578cdc45cb-cwnwd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31410
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.58s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 addons list
functional_test.go:1631: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (25.86s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [b880450a-f96b-4d26-b152-82dce2ab634d] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014735282s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220629175813-10091 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220629175813-10091 apply -f testdata/storage-provisioner/pvc.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220629175813-10091 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220629175813-10091 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [e724c23c-2240-4dc7-8ad3-bd41dbc3d87a] Pending
helpers_test.go:342: "sp-pod" [e724c23c-2240-4dc7-8ad3-bd41dbc3d87a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [e724c23c-2240-4dc7-8ad3-bd41dbc3d87a] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.006687697s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220629175813-10091 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220629175813-10091 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220629175813-10091 delete -f testdata/storage-provisioner/pod.yaml: (1.75073766s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220629175813-10091 apply -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [6acfa903-4efd-48a0-ab04-45b017476417] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [6acfa903-4efd-48a0-ab04-45b017476417] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [6acfa903-4efd-48a0-ab04-45b017476417] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00644517s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220629175813-10091 exec sp-pod -- ls /tmp/mount
E0629 18:00:37.912296   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.86s)
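The core assertion of the PVC test above is that data written through the claim outlives the pod: `touch /tmp/mount/foo` in the first sp-pod, delete and re-apply the pod, then `ls /tmp/mount` still shows the file. Simulated locally below with a temp directory standing in for the dynamically provisioned volume (no cluster involved):

```shell
# Local simulation of the persistence check; the directory plays the role of
# the volume backing the `myclaim` PVC, which survives pod deletion.
pv=$(mktemp -d)

touch "$pv/foo"    # first sp-pod:  kubectl exec sp-pod -- touch /tmp/mount/foo
                   # sp-pod is deleted and re-applied; the volume is untouched
ls "$pv"           # second sp-pod: kubectl exec sp-pod -- ls /tmp/mount
[ -e "$pv/foo" ]   # the file written before the pod restart is still present

rm -r "$pv"
```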

TestFunctional/parallel/SSHCmd (0.86s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.86s)

TestFunctional/parallel/CpCmd (1.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh -n functional-20220629175813-10091 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 cp functional-20220629175813-10091:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2057703279/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh -n functional-20220629175813-10091 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.42s)

TestFunctional/parallel/MySQL (25.36s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220629175813-10091 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-84cn8" [3883afce-a419-4d62-a8fd-cd530d518e75] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-84cn8" [3883afce-a419-4d62-a8fd-cd530d518e75] Running
E0629 18:00:47.995583   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.011336725s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629175813-10091 exec mysql-67f7d69d8b-84cn8 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220629175813-10091 exec mysql-67f7d69d8b-84cn8 -- mysql -ppassword -e "show databases;": exit status 1 (204.269123ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629175813-10091 exec mysql-67f7d69d8b-84cn8 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220629175813-10091 exec mysql-67f7d69d8b-84cn8 -- mysql -ppassword -e "show databases;": exit status 1 (156.135641ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629175813-10091 exec mysql-67f7d69d8b-84cn8 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220629175813-10091 exec mysql-67f7d69d8b-84cn8 -- mysql -ppassword -e "show databases;": exit status 1 (113.976637ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629175813-10091 exec mysql-67f7d69d8b-84cn8 -- mysql -ppassword -e "show databases;"
E0629 18:00:58.236329   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
2022/06/29 18:01:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (25.36s)
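The repeated non-zero `mysql` invocations above are expected: the pod reports `Running` before `mysqld` finishes initializing, so the test polls the same command until it stops failing with `ERROR 2002` (server not yet accepting connections). A minimal sketch of that retry-until-success pattern is below; the helper name `retry_until_success` and its parameters are illustrative, not part of the minikube test suite.

```python
import time

def retry_until_success(attempt, timeout_s=60.0, interval_s=1.0,
                        clock=time.monotonic, sleep=time.sleep):
    """Call `attempt` until it returns without raising, or the timeout expires.

    Mirrors the behavior in the log above: transient failures (a non-zero
    kubectl exec, e.g. ERROR 2002 while mysqld initializes) are retried.
    """
    deadline = clock() + timeout_s
    last_err = None
    while clock() < deadline:
        try:
            return attempt()
        except Exception as err:  # in the real test: a non-zero command exit
            last_err = err
            sleep(interval_s)
    raise TimeoutError(f"gave up after {timeout_s}s: {last_err}")
```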

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/10091/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo cat /etc/test/nested/copy/10091/hosts"
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.36s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/10091.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo cat /etc/ssl/certs/10091.pem"
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/10091.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo cat /usr/share/ca-certificates/10091.pem"
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/100912.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo cat /etc/ssl/certs/100912.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/100912.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo cat /usr/share/ca-certificates/100912.pem"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.36s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220629175813-10091 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.88s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo systemctl is-active docker"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo systemctl is-active docker": exit status 1 (405.309597ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo systemctl is-active crio": exit status 1 (471.529337ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.88s)
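The "Non-zero exit" results above are the passing condition here: with containerd as the active runtime, `systemctl is-active docker` and `systemctl is-active crio` print `inactive` and exit with status 3 (systemctl exits 0 only when a unit is active). A small sketch of how that check can be interpreted; the function name is illustrative and not part of the test suite.

```python
def runtime_disabled(exit_code: int, stdout: str) -> bool:
    """Interpret the result of `systemctl is-active <unit>`.

    systemctl exits 0 and prints "active" for a running unit; any other
    state (e.g. "inactive" with exit status 3, as in the log above) means
    the runtime is disabled, which is what this test expects for docker
    and crio when containerd is the active runtime.
    """
    return exit_code != 0 and stdout.strip() != "active"
```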

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls --format short
E0629 18:00:39.034070   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.7
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220629175813-10091
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-20220629175813-10091
docker.io/kindest/kindnetd:v20220510-4929dd75
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls --format table
E0629 18:00:40.314293   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20220510-4929dd75              | sha256:6fb66c | 45.2MB |
| docker.io/library/minikube-local-cache-test | functional-20220629175813-10091 | sha256:a45bc6 | 1.74kB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | sha256:a4ca41 | 13.6MB |
| k8s.gcr.io/kube-controller-manager          | v1.24.2                         | sha256:34cdf9 | 31MB   |
| k8s.gcr.io/pause                            | 3.3                             | sha256:0184c1 | 298kB  |
| gcr.io/google-containers/addon-resizer      | functional-20220629175813-10091 | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/etcd                             | 3.5.3-0                         | sha256:aebe75 | 102MB  |
| k8s.gcr.io/kube-apiserver                   | v1.24.2                         | sha256:d3377f | 33.8MB |
| k8s.gcr.io/kube-scheduler                   | v1.24.2                         | sha256:5d7251 | 15.5MB |
| k8s.gcr.io/pause                            | latest                          | sha256:350b16 | 72.3kB |
| docker.io/library/nginx                     | latest                          | sha256:55f4b4 | 56.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/kube-proxy                       | v1.24.2                         | sha256:a63454 | 39.5MB |
| docker.io/library/nginx                     | alpine                          | sha256:f246e6 | 10.2MB |
| k8s.gcr.io/echoserver                       | 1.8                             | sha256:82e4c8 | 46.2MB |
| k8s.gcr.io/pause                            | 3.1                             | sha256:da86e6 | 315kB  |
| k8s.gcr.io/pause                            | 3.7                             | sha256:221177 | 311kB  |
|---------------------------------------------|---------------------------------|---------------|--------|
E0629 18:00:42.874635   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls --format json:
[{"id":"sha256:f246e6f9d0b28d6eb1f7e1f12791f23587c2c6aa42c82aba8d6fe6e2e2de9e95","repoDigests":["docker.io/library/nginx@sha256:8e38930f0390cbd79b2d1528405fb17edcda5f4a30875ecf338ebaa598dc994e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10190737"},{"id":"sha256:55f4b40fe486a5b734b46bb7bf28f52fa31426bf23be068c8e7b19e58d9b8deb","repoDigests":["docker.io/library/nginx@sha256:10f14ffa93f8dedf1057897b745e5ac72ac5655c299dade0aa434c71557697ea"],"repoTags":["docker.io/library/nginx:latest"],"size":"56748232"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":["k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e"],"repoTags":["k8s.gcr.io/core
dns/coredns:v1.8.6"],"size":"13585107"},{"id":"sha256:d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503","repoDigests":["k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a"],"repoTags":["k8s.gcr.io/kube-apiserver:v1.24.2"],"size":"33795763"},{"id":"sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165","repoDigests":["k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c"],"repoTags":["k8s.gcr.io/pause:3.7"],"size":"311278"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:aebe758cef4cd05b9f8cee397582
27714d02f42ef3088023c1e3cd454f927a2b","repoDigests":["k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5"],"repoTags":["k8s.gcr.io/etcd:3.5.3-0"],"size":"102143581"},{"id":"sha256:34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df","repoDigests":["k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753"],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.24.2"],"size":"31035052"},{"id":"sha256:6fb66cd78abfe9e0735a9a751f2586b7984e0d279e87fa8dd175781de6595627","repoDigests":["docker.io/kindest/kindnetd@sha256:39494477a3fa001aae716b704a8991f4f62d2ccf1aaaa65692da6c805b18856c"],"repoTags":["docker.io/kindest/kindnetd:v20220510-4929dd75"],"size":"45239873"},{"id":"sha256:a45bc6f966ff2ac33f13c7f9c446313da0634f9730e53a4c062f89012e0a2067","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220629175813-10091"],"size":"1738"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471
df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220629175813-10091"],"size":"10823156"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536","repoDigests":["k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f"],"repoTags":["k8s.gcr.io/kube-proxy:v1.24.2"],"size":"39515830"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3
663ac","repoDigests":["k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f"],"repoTags":["k8s.gcr.io/kube-scheduler:v1.24.2"],"size":"15488980"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.43s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.43s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls --format yaml:
- id: sha256:f246e6f9d0b28d6eb1f7e1f12791f23587c2c6aa42c82aba8d6fe6e2e2de9e95
repoDigests:
- docker.io/library/nginx@sha256:8e38930f0390cbd79b2d1528405fb17edcda5f4a30875ecf338ebaa598dc994e
repoTags:
- docker.io/library/nginx:alpine
size: "10190737"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503
repoDigests:
- k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a
repoTags:
- k8s.gcr.io/kube-apiserver:v1.24.2
size: "33795763"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:6fb66cd78abfe9e0735a9a751f2586b7984e0d279e87fa8dd175781de6595627
repoDigests:
- docker.io/kindest/kindnetd@sha256:39494477a3fa001aae716b704a8991f4f62d2ccf1aaaa65692da6c805b18856c
repoTags:
- docker.io/kindest/kindnetd:v20220510-4929dd75
size: "45239873"
- id: sha256:55f4b40fe486a5b734b46bb7bf28f52fa31426bf23be068c8e7b19e58d9b8deb
repoDigests:
- docker.io/library/nginx@sha256:10f14ffa93f8dedf1057897b745e5ac72ac5655c299dade0aa434c71557697ea
repoTags:
- docker.io/library/nginx:latest
size: "56748232"
- id: sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests:
- k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "13585107"
- id: sha256:5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac
repoDigests:
- k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f
repoTags:
- k8s.gcr.io/kube-scheduler:v1.24.2
size: "15488980"
- id: sha256:a45bc6f966ff2ac33f13c7f9c446313da0634f9730e53a4c062f89012e0a2067
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220629175813-10091
size: "1738"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536
repoDigests:
- k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f
repoTags:
- k8s.gcr.io/kube-proxy:v1.24.2
size: "39515830"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
repoDigests:
- k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c
repoTags:
- k8s.gcr.io/pause:3.7
size: "311278"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220629175813-10091
size: "10823156"
- id: sha256:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b
repoDigests:
- k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5
repoTags:
- k8s.gcr.io/etcd:3.5.3-0
size: "102143581"
- id: sha256:34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df
repoDigests:
- k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.24.2
size: "31035052"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.47s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 update-context --alsologtostderr -v=2
E0629 18:00:38.393469   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.47s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh pgrep buildkitd: exit status 1 (416.854109ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image build -t localhost/my-image:functional-20220629175813-10091 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220629175813-10091 image build -t localhost/my-image:functional-20220629175813-10091 testdata/build: (3.973378225s)
functional_test.go:318: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20220629175813-10091 image build -t localhost/my-image:functional-20220629175813-10091 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.6s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 2.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:260fb702a95ec0bc2c56afb2076fa91d7f9b53da7f0974493422c50394edae05 done
#8 exporting config sha256:066eba086d876b854484774d774fccbae644f8360edd8603472b75fdb14eed83 done
#8 naming to localhost/my-image:functional-20220629175813-10091 0.0s done
#8 DONE 0.1s
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.63s)

TestFunctional/parallel/Version/components (1.07s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 version -o=json --components
functional_test.go:2196: (dbg) Done: out/minikube-linux-amd64 -p functional-20220629175813-10091 version -o=json --components: (1.069348714s)
--- PASS: TestFunctional/parallel/Version/components (1.07s)

TestFunctional/parallel/ImageCommands/Setup (1.09s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.037627019s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220629175813-10091
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.09s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220629175813-10091 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.3s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220629175813-10091 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [63a992cb-df94-4454-a867-25bbea12d950] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx-svc" [63a992cb-df94-4454-a867-25bbea12d950] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.082518467s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.30s)

TestFunctional/parallel/ProfileCmd/profile_list (0.7s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "618.581366ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1324: Took "85.870887ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.70s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629175813-10091
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220629175813-10091 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629175813-10091: (3.780482815s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.02s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1361: Took "379.824152ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1374: Took "76.027262ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629175813-10091
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220629175813-10091 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629175813-10091: (3.933178068s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.17s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220629175813-10091
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629175813-10091
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220629175813-10091 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629175813-10091: (4.326937573s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629175813-10091 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.106.8.159 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220629175813-10091 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.86s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image save gcr.io/google-containers/addon-resizer:functional-20220629175813-10091 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.86s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image rm gcr.io/google-containers/addon-resizer:functional-20220629175813-10091
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/MountCmd/any-port (8.89s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220629175813-10091 /tmp/TestFunctionalparallelMountCmdany-port2997143784/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1656525627280085361" to /tmp/TestFunctionalparallelMountCmdany-port2997143784/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1656525627280085361" to /tmp/TestFunctionalparallelMountCmdany-port2997143784/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1656525627280085361" to /tmp/TestFunctionalparallelMountCmdany-port2997143784/001/test-1656525627280085361
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (384.62359ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 29 18:00 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 29 18:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 29 18:00 test-1656525627280085361
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh cat /mount-9p/test-1656525627280085361
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220629175813-10091 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [78dcfbb9-722b-4f94-a74b-a56a66bd1bea] Pending
helpers_test.go:342: "busybox-mount" [78dcfbb9-722b-4f94-a74b-a56a66bd1bea] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:342: "busybox-mount" [78dcfbb9-722b-4f94-a74b-a56a66bd1bea] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:342: "busybox-mount" [78dcfbb9-722b-4f94-a74b-a56a66bd1bea] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005807193s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220629175813-10091 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220629175813-10091 /tmp/TestFunctionalparallelMountCmdany-port2997143784/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.89s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220629175813-10091
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220629175813-10091
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-linux-amd64 -p functional-20220629175813-10091 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220629175813-10091: (1.261249419s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220629175813-10091
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)

TestFunctional/parallel/MountCmd/specific-port (2.34s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220629175813-10091 /tmp/TestFunctionalparallelMountCmdspecific-port2624719033/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (408.333901ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh -- ls -la /mount-9p
E0629 18:00:37.755241   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
E0629 18:00:37.760832   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
E0629 18:00:37.771099   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
E0629 18:00:37.791381   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
E0629 18:00:37.831628   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220629175813-10091 /tmp/TestFunctionalparallelMountCmdspecific-port2624719033/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo umount -f /mount-9p"
E0629 18:00:38.072959   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh "sudo umount -f /mount-9p": exit status 1 (402.597658ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220629175813-10091 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220629175813-10091 /tmp/TestFunctionalparallelMountCmdspecific-port2624719033/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.34s)

TestFunctional/delete_addon-resizer_images (0.1s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220629175813-10091
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220629175813-10091
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220629175813-10091
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (67.91s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220629180105-10091 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0629 18:01:18.716758   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
E0629 18:01:59.677053   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220629180105-10091 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m7.914215141s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (67.91s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.62s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220629180105-10091 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220629180105-10091 addons enable ingress --alsologtostderr -v=5: (9.615182705s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.62s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.42s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220629180105-10091 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.42s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (40.38s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:164: (dbg) Run:  kubectl --context ingress-addon-legacy-20220629180105-10091 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:164: (dbg) Done: kubectl --context ingress-addon-legacy-20220629180105-10091 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.962950687s)
addons_test.go:184: (dbg) Run:  kubectl --context ingress-addon-legacy-20220629180105-10091 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-20220629180105-10091 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [cf453282-496d-4ae2-97bf-ff00ee6a3a02] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [cf453282-496d-4ae2-97bf-ff00ee6a3a02] Running
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.004539859s
addons_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220629180105-10091 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Run:  kubectl --context ingress-addon-legacy-20220629180105-10091 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220629180105-10091 ip
addons_test.go:249: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220629180105-10091 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:258: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220629180105-10091 addons disable ingress-dns --alsologtostderr -v=1: (8.805737407s)
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220629180105-10091 addons disable ingress --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220629180105-10091 addons disable ingress --alsologtostderr -v=1: (7.259453565s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (40.38s)

TestJSONOutput/start/Command (56.93s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220629180306-10091 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0629 18:03:21.600437   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220629180306-10091 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (56.928558544s)
--- PASS: TestJSONOutput/start/Command (56.93s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220629180306-10091 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220629180306-10091 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (20.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220629180306-10091 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220629180306-10091 --output=json --user=testUser: (20.156804801s)
--- PASS: TestJSONOutput/stop/Command (20.16s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.29s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220629180429-10091 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220629180429-10091 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.059977ms)
-- stdout --
	{"specversion":"1.0","id":"a676acd1-2bde-4900-97e0-a4d68fc831ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220629180429-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8109e65-2bc8-4769-b09b-b49316301d47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14420"}}
	{"specversion":"1.0","id":"7cb6ba99-c02f-469e-8bee-b039f0d9d731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ee22a72d-552d-4198-859d-75facdd248da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig"}}
	{"specversion":"1.0","id":"98315712-1235-4679-866c-3eb2ee4d0ffe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube"}}
	{"specversion":"1.0","id":"207a8c92-4577-4e7a-8982-6258bd433527","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7dace7f4-c647-40ea-a15e-e6b7264cc2b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220629180429-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220629180429-10091
--- PASS: TestErrorJSONOutput (0.29s)

TestKicCustomNetwork/create_custom_network (35.7s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220629180430-10091 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220629180430-10091 --network=: (33.512317659s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220629180430-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220629180430-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220629180430-10091: (2.155583s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.70s)

TestKicCustomNetwork/use_default_bridge_network (28.62s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220629180505-10091 --network=bridge
E0629 18:05:12.202342   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:05:12.207667   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:05:12.217926   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:05:12.238213   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:05:12.278470   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:05:12.358802   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:05:12.519261   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:05:13.046031   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:05:13.687009   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:05:14.967209   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:05:17.528326   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:05:22.649447   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220629180505-10091 --network=bridge: (26.504712656s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220629180505-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220629180505-10091
E0629 18:05:32.889917   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220629180505-10091: (2.083688625s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.62s)

TestKicExistingNetwork (29.45s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220629180534-10091 --network=existing-network
E0629 18:05:37.754611   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
E0629 18:05:53.370154   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220629180534-10091 --network=existing-network: (27.234901125s)
helpers_test.go:175: Cleaning up "existing-network-20220629180534-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220629180534-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220629180534-10091: (2.015707741s)
--- PASS: TestKicExistingNetwork (29.45s)

TestKicCustomSubnet (28.86s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220629180603-10091 --subnet=192.168.60.0/24
E0629 18:06:05.441385   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220629180603-10091 --subnet=192.168.60.0/24: (26.664699722s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220629180603-10091 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220629180603-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220629180603-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220629180603-10091: (2.162384888s)
--- PASS: TestKicCustomSubnet (28.86s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (52.5s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-20220629180632-10091 --driver=docker  --container-runtime=containerd
E0629 18:06:34.330896   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-20220629180632-10091 --driver=docker  --container-runtime=containerd: (23.336939897s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-20220629180632-10091 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-20220629180632-10091 --driver=docker  --container-runtime=containerd: (23.40911031s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-20220629180632-10091
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-20220629180632-10091
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220629180632-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-20220629180632-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-20220629180632-10091: (2.197921565s)
helpers_test.go:175: Cleaning up "first-20220629180632-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-20220629180632-10091
E0629 18:07:23.145811   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:07:23.151058   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:07:23.161302   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:07:23.181624   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:07:23.221970   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:07:23.302791   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:07:23.463210   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:07:23.783783   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:07:24.424717   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-20220629180632-10091: (2.274294749s)
--- PASS: TestMinikubeProfile (52.50s)

TestMountStart/serial/StartWithMountFirst (4.91s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220629180725-10091 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0629 18:07:25.705383   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:07:28.266051   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220629180725-10091 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.910784726s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.91s)

TestMountStart/serial/VerifyMountFirst (0.34s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220629180725-10091 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.34s)

TestMountStart/serial/StartWithMountSecond (4.92s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220629180725-10091 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0629 18:07:33.386929   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220629180725-10091 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.918875844s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.92s)

TestMountStart/serial/VerifyMountSecond (0.34s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220629180725-10091 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.34s)

TestMountStart/serial/DeleteFirst (1.8s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220629180725-10091 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220629180725-10091 --alsologtostderr -v=5: (1.798117118s)
--- PASS: TestMountStart/serial/DeleteFirst (1.80s)

TestMountStart/serial/VerifyMountPostDelete (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220629180725-10091 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220629180725-10091
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220629180725-10091: (1.276260567s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (6.65s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220629180725-10091
E0629 18:07:43.627573   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220629180725-10091: (5.654231479s)
--- PASS: TestMountStart/serial/RestartStopped (6.65s)

TestMountStart/serial/VerifyMountPostStop (0.34s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220629180725-10091 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.34s)

TestMultiNode/serial/FreshStart2Nodes (81.17s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220629180748-10091 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0629 18:07:56.251993   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:08:04.108019   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:08:45.069125   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220629180748-10091 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m20.584457765s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.17s)

TestMultiNode/serial/DeployApp2Nodes (4.1s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- rollout status deployment/busybox: (2.286050822s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- exec busybox-d46db594c-pmrvp -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- exec busybox-d46db594c-q7s2m -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- exec busybox-d46db594c-pmrvp -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- exec busybox-d46db594c-q7s2m -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- exec busybox-d46db594c-pmrvp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- exec busybox-d46db594c-q7s2m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.10s)

TestMultiNode/serial/PingHostFrom2Pods (0.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- exec busybox-d46db594c-pmrvp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- exec busybox-d46db594c-pmrvp -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- exec busybox-d46db594c-q7s2m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220629180748-10091 -- exec busybox-d46db594c-q7s2m -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)
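The host-IP extraction used by PingHostFrom2Pods above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) depends on busybox nslookup's fixed output layout. A minimal sketch against canned output — the sample text below is an assumption modeled on typical busybox output, not captured from this run:

```shell
# Hypothetical busybox-style nslookup output (assumed layout, not from this run).
# Line 5 is "Address 1: <ip> <name>", so NR==5 plus the third space-separated
# field yields the bare host IP that the test then pings.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.58.1 host.minikube.internal'

host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

Pinning the line number ties the parsing to busybox's formatting; GNU nslookup prints a different layout, which is why the lookup runs inside the busybox pod rather than on the host.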

TestMultiNode/serial/AddNode (34.54s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220629180748-10091 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220629180748-10091 -v 3 --alsologtostderr: (33.794793236s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (34.54s)

TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (12.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 cp testdata/cp-test.txt multinode-20220629180748-10091:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 cp multinode-20220629180748-10091:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1660568612/001/cp-test_multinode-20220629180748-10091.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 cp multinode-20220629180748-10091:/home/docker/cp-test.txt multinode-20220629180748-10091-m02:/home/docker/cp-test_multinode-20220629180748-10091_multinode-20220629180748-10091-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091-m02 "sudo cat /home/docker/cp-test_multinode-20220629180748-10091_multinode-20220629180748-10091-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 cp multinode-20220629180748-10091:/home/docker/cp-test.txt multinode-20220629180748-10091-m03:/home/docker/cp-test_multinode-20220629180748-10091_multinode-20220629180748-10091-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091-m03 "sudo cat /home/docker/cp-test_multinode-20220629180748-10091_multinode-20220629180748-10091-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 cp testdata/cp-test.txt multinode-20220629180748-10091-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 cp multinode-20220629180748-10091-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1660568612/001/cp-test_multinode-20220629180748-10091-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 cp multinode-20220629180748-10091-m02:/home/docker/cp-test.txt multinode-20220629180748-10091:/home/docker/cp-test_multinode-20220629180748-10091-m02_multinode-20220629180748-10091.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091 "sudo cat /home/docker/cp-test_multinode-20220629180748-10091-m02_multinode-20220629180748-10091.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 cp multinode-20220629180748-10091-m02:/home/docker/cp-test.txt multinode-20220629180748-10091-m03:/home/docker/cp-test_multinode-20220629180748-10091-m02_multinode-20220629180748-10091-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091-m03 "sudo cat /home/docker/cp-test_multinode-20220629180748-10091-m02_multinode-20220629180748-10091-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 cp testdata/cp-test.txt multinode-20220629180748-10091-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 cp multinode-20220629180748-10091-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1660568612/001/cp-test_multinode-20220629180748-10091-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 cp multinode-20220629180748-10091-m03:/home/docker/cp-test.txt multinode-20220629180748-10091:/home/docker/cp-test_multinode-20220629180748-10091-m03_multinode-20220629180748-10091.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091 "sudo cat /home/docker/cp-test_multinode-20220629180748-10091-m03_multinode-20220629180748-10091.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 cp multinode-20220629180748-10091-m03:/home/docker/cp-test.txt multinode-20220629180748-10091-m02:/home/docker/cp-test_multinode-20220629180748-10091-m03_multinode-20220629180748-10091-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 ssh -n multinode-20220629180748-10091-m02 "sudo cat /home/docker/cp-test_multinode-20220629180748-10091-m03_multinode-20220629180748-10091-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.06s)

TestMultiNode/serial/StopNode (2.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220629180748-10091 node stop m03: (1.27417482s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220629180748-10091 status: exit status 7 (588.578013ms)

-- stdout --
	multinode-20220629180748-10091
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220629180748-10091-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220629180748-10091-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220629180748-10091 status --alsologtostderr: exit status 7 (587.388043ms)

-- stdout --
	multinode-20220629180748-10091
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220629180748-10091-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220629180748-10091-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0629 18:10:03.164895   99625 out.go:296] Setting OutFile to fd 1 ...
	I0629 18:10:03.165076   99625 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:10:03.165087   99625 out.go:309] Setting ErrFile to fd 2...
	I0629 18:10:03.165092   99625 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:10:03.165203   99625 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 18:10:03.165391   99625 out.go:303] Setting JSON to false
	I0629 18:10:03.165411   99625 mustload.go:65] Loading cluster: multinode-20220629180748-10091
	I0629 18:10:03.165703   99625 config.go:178] Loaded profile config "multinode-20220629180748-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:10:03.165717   99625 status.go:253] checking status of multinode-20220629180748-10091 ...
	I0629 18:10:03.166054   99625 cli_runner.go:164] Run: docker container inspect multinode-20220629180748-10091 --format={{.State.Status}}
	I0629 18:10:03.198375   99625 status.go:328] multinode-20220629180748-10091 host status = "Running" (err=<nil>)
	I0629 18:10:03.198400   99625 host.go:66] Checking if "multinode-20220629180748-10091" exists ...
	I0629 18:10:03.198637   99625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629180748-10091
	I0629 18:10:03.228872   99625 host.go:66] Checking if "multinode-20220629180748-10091" exists ...
	I0629 18:10:03.229145   99625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 18:10:03.229184   99625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629180748-10091
	I0629 18:10:03.260040   99625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49227 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/multinode-20220629180748-10091/id_rsa Username:docker}
	I0629 18:10:03.341054   99625 ssh_runner.go:195] Run: systemctl --version
	I0629 18:10:03.344430   99625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 18:10:03.352658   99625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:10:03.450237   99625 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-29 18:10:03.381272311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1033-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:10:03.450882   99625 kubeconfig.go:92] found "multinode-20220629180748-10091" server: "https://192.168.58.2:8443"
	I0629 18:10:03.450910   99625 api_server.go:165] Checking apiserver status ...
	I0629 18:10:03.450946   99625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 18:10:03.459605   99625 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1222/cgroup
	I0629 18:10:03.466754   99625 api_server.go:181] apiserver freezer: "4:freezer:/docker/23fc56865b7b1e98b31a2f4d632336398f837843cc7b108fe3fdd3ebf28b59f5/kubepods/burstable/pod9238954f0b7ce62b54e8ace5c5756c5f/fe5aa2ee1f0bdaa7a49760aea385116a012f1bb98245532ca2392f3ed890b627"
	I0629 18:10:03.466810   99625 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/23fc56865b7b1e98b31a2f4d632336398f837843cc7b108fe3fdd3ebf28b59f5/kubepods/burstable/pod9238954f0b7ce62b54e8ace5c5756c5f/fe5aa2ee1f0bdaa7a49760aea385116a012f1bb98245532ca2392f3ed890b627/freezer.state
	I0629 18:10:03.472922   99625 api_server.go:203] freezer state: "THAWED"
	I0629 18:10:03.472948   99625 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0629 18:10:03.477354   99625 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0629 18:10:03.477379   99625 status.go:419] multinode-20220629180748-10091 apiserver status = Running (err=<nil>)
	I0629 18:10:03.477392   99625 status.go:255] multinode-20220629180748-10091 status: &{Name:multinode-20220629180748-10091 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0629 18:10:03.477413   99625 status.go:253] checking status of multinode-20220629180748-10091-m02 ...
	I0629 18:10:03.477648   99625 cli_runner.go:164] Run: docker container inspect multinode-20220629180748-10091-m02 --format={{.State.Status}}
	I0629 18:10:03.508647   99625 status.go:328] multinode-20220629180748-10091-m02 host status = "Running" (err=<nil>)
	I0629 18:10:03.508668   99625 host.go:66] Checking if "multinode-20220629180748-10091-m02" exists ...
	I0629 18:10:03.509023   99625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629180748-10091-m02
	I0629 18:10:03.539122   99625 host.go:66] Checking if "multinode-20220629180748-10091-m02" exists ...
	I0629 18:10:03.539352   99625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 18:10:03.539384   99625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629180748-10091-m02
	I0629 18:10:03.569365   99625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49232 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/multinode-20220629180748-10091-m02/id_rsa Username:docker}
	I0629 18:10:03.648898   99625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 18:10:03.657280   99625 status.go:255] multinode-20220629180748-10091-m02 status: &{Name:multinode-20220629180748-10091-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0629 18:10:03.657316   99625 status.go:253] checking status of multinode-20220629180748-10091-m03 ...
	I0629 18:10:03.657613   99625 cli_runner.go:164] Run: docker container inspect multinode-20220629180748-10091-m03 --format={{.State.Status}}
	I0629 18:10:03.690841   99625 status.go:328] multinode-20220629180748-10091-m03 host status = "Stopped" (err=<nil>)
	I0629 18:10:03.690862   99625 status.go:341] host is not running, skipping remaining checks
	I0629 18:10:03.690869   99625 status.go:255] multinode-20220629180748-10091-m03 status: &{Name:multinode-20220629180748-10091-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
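The status stderr above includes minikube's disk-usage probe, `df -h /var | awk 'NR==2{print $5}'`. It works because `df -h` emits a header on line 1 and the data row on line 2, whose fifth column is Use%. A runnable re-creation of that probe (a sketch, assuming the df data row does not wrap onto a second line, which holds for short device names):

```shell
# df -h prints: Filesystem Size Used Avail Use% Mounted on  (header, NR==1)
# followed by the data row (NR==2); $5 selects the Use% column, e.g. "12%".
usage=$(df -h /var | awk 'NR==2{print $5}')
echo "$usage"
```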

TestMultiNode/serial/StartAfterStop (31.39s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 node start m03 --alsologtostderr
E0629 18:10:06.991508   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:10:12.200970   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220629180748-10091 node start m03 --alsologtostderr: (30.564975231s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.39s)

TestMultiNode/serial/RestartKeepsNodes (157.64s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220629180748-10091
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220629180748-10091
E0629 18:10:37.754645   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
E0629 18:10:40.092992   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220629180748-10091: (41.192360738s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220629180748-10091 --wait=true -v=8 --alsologtostderr
E0629 18:12:23.145622   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
E0629 18:12:50.832993   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220629180748-10091 --wait=true -v=8 --alsologtostderr: (1m56.316755986s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220629180748-10091
--- PASS: TestMultiNode/serial/RestartKeepsNodes (157.64s)

TestMultiNode/serial/DeleteNode (5.06s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220629180748-10091 node delete m03: (4.368662764s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.06s)

TestMultiNode/serial/StopMultiNode (40.28s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220629180748-10091 stop: (40.035959637s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220629180748-10091 status: exit status 7 (121.975535ms)

-- stdout --
	multinode-20220629180748-10091
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220629180748-10091-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220629180748-10091 status --alsologtostderr: exit status 7 (123.35725ms)

-- stdout --
	multinode-20220629180748-10091
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220629180748-10091-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0629 18:13:57.994261  110019 out.go:296] Setting OutFile to fd 1 ...
	I0629 18:13:57.994372  110019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:13:57.994382  110019 out.go:309] Setting ErrFile to fd 2...
	I0629 18:13:57.994386  110019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:13:57.994489  110019 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 18:13:57.994852  110019 out.go:303] Setting JSON to false
	I0629 18:13:57.994880  110019 mustload.go:65] Loading cluster: multinode-20220629180748-10091
	I0629 18:13:57.995717  110019 config.go:178] Loaded profile config "multinode-20220629180748-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:13:57.995813  110019 status.go:253] checking status of multinode-20220629180748-10091 ...
	I0629 18:13:57.996540  110019 cli_runner.go:164] Run: docker container inspect multinode-20220629180748-10091 --format={{.State.Status}}
	I0629 18:13:58.027018  110019 status.go:328] multinode-20220629180748-10091 host status = "Stopped" (err=<nil>)
	I0629 18:13:58.027040  110019 status.go:341] host is not running, skipping remaining checks
	I0629 18:13:58.027049  110019 status.go:255] multinode-20220629180748-10091 status: &{Name:multinode-20220629180748-10091 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0629 18:13:58.027078  110019 status.go:253] checking status of multinode-20220629180748-10091-m02 ...
	I0629 18:13:58.027310  110019 cli_runner.go:164] Run: docker container inspect multinode-20220629180748-10091-m02 --format={{.State.Status}}
	I0629 18:13:58.055583  110019 status.go:328] multinode-20220629180748-10091-m02 host status = "Stopped" (err=<nil>)
	I0629 18:13:58.055603  110019 status.go:341] host is not running, skipping remaining checks
	I0629 18:13:58.055611  110019 status.go:255] multinode-20220629180748-10091-m02 status: &{Name:multinode-20220629180748-10091-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.28s)
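After the stop, `minikube status` exits with status 7 and prints one plaintext block per node, as shown above. A minimal sketch (not minikube's own code) of turning that text into per-node dictionaries; the sample text is copied from the output in this log:

```python
# Sample copied from the `minikube status` stdout above; exit status 7 from
# the CLI is how the test knows the cluster is stopped.
STATUS_TEXT = """\
multinode-20220629180748-10091
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-20220629180748-10091-m02
type: Worker
host: Stopped
kubelet: Stopped
"""

def parse_status(text):
    """Split on blank lines: first line of each block is the node name,
    remaining lines are 'field: value' pairs."""
    nodes = {}
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        nodes[lines[0]] = dict(line.split(": ", 1) for line in lines[1:])
    return nodes

nodes = parse_status(STATUS_TEXT)
```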

TestMultiNode/serial/RestartMultiNode (80.64s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220629180748-10091 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0629 18:15:12.198962   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220629180748-10091 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m19.951788974s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220629180748-10091 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.64s)

TestMultiNode/serial/ValidateNameConflict (26.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220629180748-10091
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220629180748-10091-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220629180748-10091-m02 --driver=docker  --container-runtime=containerd: exit status 14 (83.442942ms)

-- stdout --
	* [multinode-20220629180748-10091-m02] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220629180748-10091-m02' is duplicated with machine name 'multinode-20220629180748-10091-m02' in profile 'multinode-20220629180748-10091'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220629180748-10091-m03 --driver=docker  --container-runtime=containerd
E0629 18:15:37.754691   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220629180748-10091-m03 --driver=docker  --container-runtime=containerd: (23.696695164s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220629180748-10091
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220629180748-10091: exit status 80 (336.659097ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220629180748-10091
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220629180748-10091-m03 already exists in multinode-20220629180748-10091-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220629180748-10091-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220629180748-10091-m03: (2.22907969s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.41s)
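The MK_USAGE failure above happens because the requested profile name collides with an existing cluster's machine name (minikube names extra nodes `<profile>-m02`, `-m03`, ...). A hedged sketch of that collision rule — this is an illustration of the check the test exercises, not minikube's actual implementation:

```python
# Illustration only, not minikube source: a new profile name must not equal
# any existing profile name or a machine name derived from one.
def machine_names(profile, node_count=2):
    # minikube names extra nodes <profile>-m02, <profile>-m03, ...
    return [profile] + [f"{profile}-m{n:02d}" for n in range(2, node_count + 1)]

def name_conflicts(new_profile, existing_profiles):
    return any(new_profile in machine_names(p) for p in existing_profiles)

existing = ["multinode-20220629180748-10091"]
# "-m02" collides with the two-node cluster's worker machine -> rejected,
# while "-m03" does not exist for a two-node cluster -> allowed to start.
```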

TestPreload (114.43s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220629181549-10091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220629181549-10091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m10.479049767s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220629181549-10091 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220629181549-10091 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.001482736s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220629181549-10091 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
E0629 18:17:00.802406   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
E0629 18:17:23.145721   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220629181549-10091 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (40.152983862s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220629181549-10091 -- sudo crictl image ls
helpers_test.go:175: Cleaning up "test-preload-20220629181549-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220629181549-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220629181549-10091: (2.426086231s)
--- PASS: TestPreload (114.43s)

TestScheduledStopUnix (101.27s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220629181743-10091 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220629181743-10091 --memory=2048 --driver=docker  --container-runtime=containerd: (24.516255002s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220629181743-10091 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220629181743-10091 -n scheduled-stop-20220629181743-10091
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220629181743-10091 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220629181743-10091 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220629181743-10091 -n scheduled-stop-20220629181743-10091
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220629181743-10091
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220629181743-10091 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220629181743-10091
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220629181743-10091: exit status 7 (95.02709ms)

-- stdout --
	scheduled-stop-20220629181743-10091
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220629181743-10091 -n scheduled-stop-20220629181743-10091
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220629181743-10091 -n scheduled-stop-20220629181743-10091: exit status 7 (91.705231ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220629181743-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220629181743-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220629181743-10091: (5.064878221s)
--- PASS: TestScheduledStopUnix (101.27s)

TestInsufficientStorage (15.97s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220629181924-10091 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220629181924-10091 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.330889519s)

-- stdout --
	{"specversion":"1.0","id":"61ceb8a6-4dc1-459b-bc80-b2286366cddb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220629181924-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"048c9793-539f-49a3-9dfe-2f78d0ecb5af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14420"}}
	{"specversion":"1.0","id":"54022698-1f0a-4451-b987-6130674cdfdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0e93dd40-8137-46e0-bf70-e52abcfb49a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig"}}
	{"specversion":"1.0","id":"224a155e-d5d0-4d8b-b2e4-81bbe1716418","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube"}}
	{"specversion":"1.0","id":"d0fc8ff9-bab4-40b8-bb70-23456f4d7645","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5af7ac6b-ac44-48a7-b8ea-d1e14e648313","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c85d6387-9f29-4e98-858e-f979ae7d0650","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fecd8917-e967-4047-b024-04b2f97d7beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c0ad1e2d-f1ad-4575-836b-1bbcbdd8b9b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0d8010fe-8d97-4e3e-b41d-d6d0eddbdda9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220629181924-10091 in cluster insufficient-storage-20220629181924-10091","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f70aab4e-a150-4f33-8398-95a23f9371d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3d77941f-d1df-4416-a7fd-c10f220b8ee1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8935a921-3994-40c1-b9f7-b4e07329d3e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220629181924-10091 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220629181924-10091 --output=json --layout=cluster: exit status 7 (344.690027ms)

-- stdout --
	{"Name":"insufficient-storage-20220629181924-10091","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220629181924-10091","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0629 18:19:34.479928  130359 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220629181924-10091" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220629181924-10091 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220629181924-10091 --output=json --layout=cluster: exit status 7 (342.938306ms)

-- stdout --
	{"Name":"insufficient-storage-20220629181924-10091","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220629181924-10091","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0629 18:19:34.823758  130469 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220629181924-10091" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	E0629 18:19:34.831494  130469 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/insufficient-storage-20220629181924-10091/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220629181924-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220629181924-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220629181924-10091: (5.950488447s)
--- PASS: TestInsufficientStorage (15.97s)
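The `status --output=json --layout=cluster` calls above emit a nested cluster document; the test asserts `StatusName` is `InsufficientStorage` for the cluster and its node. A sketch of reading that document — the sample below is trimmed from the output in this log, and the field names are taken from it rather than from any schema guarantee:

```python
import json

# Sample trimmed from the `--layout=cluster` stdout above.
CLUSTER_STATUS = json.loads("""{
  "Name": "insufficient-storage-20220629181924-10091",
  "StatusCode": 507,
  "StatusName": "InsufficientStorage",
  "StatusDetail": "/var is almost out of disk space",
  "Components": {"kubeconfig": {"Name": "kubeconfig", "StatusCode": 500, "StatusName": "Error"}},
  "Nodes": [{
    "Name": "insufficient-storage-20220629181924-10091",
    "StatusCode": 507,
    "StatusName": "InsufficientStorage",
    "Components": {
      "apiserver": {"Name": "apiserver", "StatusCode": 405, "StatusName": "Stopped"},
      "kubelet":   {"Name": "kubelet",   "StatusCode": 405, "StatusName": "Stopped"}
    }
  }]
}""")

def unhealthy_components(cluster):
    """Collect (node, component, status) triples whose StatusCode is not 200 (OK)."""
    out = []
    for node in cluster["Nodes"]:
        for comp in node["Components"].values():
            if comp["StatusCode"] != 200:
                out.append((node["Name"], comp["Name"], comp["StatusName"]))
    return out
```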

TestRunningBinaryUpgrade (105.5s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.1264062493.exe start -p running-upgrade-20220629182131-10091 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0629 18:21:35.453729   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.1264062493.exe start -p running-upgrade-20220629182131-10091 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (56.738827973s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220629182131-10091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220629182131-10091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.839866138s)
helpers_test.go:175: Cleaning up "running-upgrade-20220629182131-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220629182131-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220629182131-10091: (6.154594655s)
--- PASS: TestRunningBinaryUpgrade (105.50s)

TestMissingContainerUpgrade (158.3s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.1282125961.exe start -p missing-upgrade-20220629181940-10091 --memory=2200 --driver=docker  --container-runtime=containerd
E0629 18:20:12.199232   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
E0629 18:20:37.754890   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.1282125961.exe start -p missing-upgrade-20220629181940-10091 --memory=2200 --driver=docker  --container-runtime=containerd: (1m27.82405479s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220629181940-10091

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220629181940-10091: (10.440812936s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220629181940-10091
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220629181940-10091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220629181940-10091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (55.434203228s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220629181940-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220629181940-10091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220629181940-10091: (3.15304711s)
--- PASS: TestMissingContainerUpgrade (158.30s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220629181940-10091 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220629181940-10091 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (93.300567ms)

-- stdout --
	* [NoKubernetes-20220629181940-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestStoppedBinaryUpgrade/Setup (0.56s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.56s)

TestNoKubernetes/serial/StartWithK8s (60.67s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220629181940-10091 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220629181940-10091 --driver=docker  --container-runtime=containerd: (1m0.196936556s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220629181940-10091 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (60.67s)

TestStoppedBinaryUpgrade/Upgrade (120.87s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.2205662922.exe start -p stopped-upgrade-20220629181940-10091 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.2205662922.exe start -p stopped-upgrade-20220629181940-10091 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (59.81560441s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.2205662922.exe -p stopped-upgrade-20220629181940-10091 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.2205662922.exe -p stopped-upgrade-20220629181940-10091 stop: (3.270646398s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220629181940-10091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220629181940-10091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (57.783988753s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (120.87s)

TestNoKubernetes/serial/StartWithStopK8s (22.79s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220629181940-10091 --no-kubernetes --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220629181940-10091 --no-kubernetes --driver=docker  --container-runtime=containerd: (19.429215723s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220629181940-10091 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220629181940-10091 status -o json: exit status 2 (362.290551ms)

-- stdout --
	{"Name":"NoKubernetes-20220629181940-10091","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220629181940-10091
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220629181940-10091: (2.996904666s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.79s)
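
The `status -o json` output above reports `Host` Running but `Kubelet` Stopped, which is why `status` exits non-zero (status 2) even though the test expects exactly that. A quick way to pull a field out of that JSON in plain shell (assumes the compact `"key":"value"` formatting shown above; a real script would use `jq`):

```shell
# Crude field extraction from the status JSON shown in the log.
status_json='{"Name":"NoKubernetes-20220629181940-10091","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

field() {
  # print the string value of key $1; only works for quoted values
  printf '%s\n' "$status_json" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"
}

field Host      # Running
field Kubelet   # Stopped
```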

TestNoKubernetes/serial/Start (9.74s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220629181940-10091 --no-kubernetes --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220629181940-10091 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.743551802s)
--- PASS: TestNoKubernetes/serial/Start (9.74s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220629181940-10091 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220629181940-10091 "sudo systemctl is-active --quiet service kubelet": exit status 1 (422.852243ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)
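
VerifyK8sNotRunning passes precisely because the ssh command fails: `systemctl is-active --quiet` exits 0 only when the unit is active, and here kubelet must not be running. A sketch of that inverted check (the unit probe is simulated via its exit status, since no systemd instance is assumed here):

```shell
# The test passes when the remote command FAILS: exit 0 means the unit is
# active; a non-zero status (3 in the log above, via ssh) means stopped.
kubelet_is_stopped() {
  rc="$1"   # simulated exit status of `systemctl is-active --quiet kubelet`
  [ "$rc" -ne 0 ]
}

kubelet_is_stopped 3 && echo "kubelet not running: test passes"
```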

TestNoKubernetes/serial/ProfileList (7.59s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (6.781352083s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (7.59s)

TestNoKubernetes/serial/Stop (1.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220629181940-10091
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220629181940-10091: (1.368484832s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

TestNoKubernetes/serial/StartNoArgs (5.81s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220629181940-10091 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220629181940-10091 --driver=docker  --container-runtime=containerd: (5.807953962s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.81s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220629181940-10091 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220629181940-10091 "sudo systemctl is-active --quiet service kubelet": exit status 1 (382.838386ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220629181940-10091
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220629181940-10091: (1.19889843s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

TestPause/serial/Start (54.24s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220629182145-10091 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220629182145-10091 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (54.239914595s)
--- PASS: TestPause/serial/Start (54.24s)

TestPause/serial/SecondStartNoReconfiguration (15.78s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220629182145-10091 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220629182145-10091 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.768045738s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (15.78s)

TestNetworkPlugins/group/false (0.51s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220629182253-10091 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20220629182253-10091 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (251.394841ms)

-- stdout --
	* [false-20220629182253-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0629 18:22:53.165281  171675 out.go:296] Setting OutFile to fd 1 ...
	I0629 18:22:53.165672  171675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:22:53.165688  171675 out.go:309] Setting ErrFile to fd 2...
	I0629 18:22:53.165696  171675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 18:22:53.166427  171675 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 18:22:53.167087  171675 out.go:303] Setting JSON to false
	I0629 18:22:53.168525  171675 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3923,"bootTime":1656523050,"procs":578,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1033-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0629 18:22:53.168590  171675 start.go:125] virtualization: kvm guest
	I0629 18:22:53.170762  171675 out.go:177] * [false-20220629182253-10091] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0629 18:22:53.172688  171675 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 18:22:53.172749  171675 notify.go:193] Checking for updates...
	I0629 18:22:53.174061  171675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 18:22:53.175385  171675 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 18:22:53.176748  171675 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 18:22:53.178153  171675 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0629 18:22:53.179795  171675 config.go:178] Loaded profile config "kubernetes-upgrade-20220629182055-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:22:53.179933  171675 config.go:178] Loaded profile config "pause-20220629182145-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
	I0629 18:22:53.180024  171675 config.go:178] Loaded profile config "running-upgrade-20220629182131-10091": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0629 18:22:53.180080  171675 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 18:22:53.219326  171675 docker.go:137] docker version: linux-20.10.17
	I0629 18:22:53.219419  171675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 18:22:53.339984  171675 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:65 SystemTime:2022-06-29 18:22:53.250021056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1033-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 18:22:53.340088  171675 docker.go:254] overlay module found
	I0629 18:22:53.342445  171675 out.go:177] * Using the docker driver based on user configuration
	I0629 18:22:53.343977  171675 start.go:284] selected driver: docker
	I0629 18:22:53.344000  171675 start.go:808] validating driver "docker" against <nil>
	I0629 18:22:53.344024  171675 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 18:22:53.346643  171675 out.go:177] 
	W0629 18:22:53.348096  171675 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0629 18:22:53.349479  171675 out.go:177] 

** /stderr **
helpers_test.go:175: Cleaning up "false-20220629182253-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20220629182253-10091
--- PASS: TestNetworkPlugins/group/false (0.51s)
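
This test deliberately requests `--cni=false` with containerd and passes when minikube refuses: per the stderr above, the containerd runtime requires a CNI. A hypothetical shell mirror of that validation (not minikube source; exit status 14 = MK_USAGE, as in the log):

```shell
# Hypothetical mirror of the CNI validation seen in the stderr above.
requires_cni() {
  runtime="$1"; cni="$2"
  if [ "$runtime" = "containerd" ] && [ "$cni" = "false" ]; then
    echo 'X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI' >&2
    return 14
  fi
  return 0
}
```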

TestPause/serial/Pause (0.74s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220629182145-10091 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

TestPause/serial/VerifyStatus (0.38s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220629182145-10091 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220629182145-10091 --output=json --layout=cluster: exit status 2 (382.92763ms)

-- stdout --
	{"Name":"pause-20220629182145-10091","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220629182145-10091","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
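
The `--layout=cluster` JSON above uses HTTP-style status codes, and `status` exits 2 while the cluster is paused. A small lookup covering the codes that actually appear in this output (minikube defines more; only these three are visible here):

```shell
# Status codes as they appear in the JSON above: 200 OK, 405 Stopped,
# 418 Paused. Anything else is mapped to Unknown in this sketch.
status_name() {
  case "$1" in
    200) echo "OK" ;;
    405) echo "Stopped" ;;
    418) echo "Paused" ;;
    *)   echo "Unknown" ;;
  esac
}

status_name 418   # Paused
```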

TestPause/serial/Unpause (0.66s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220629182145-10091 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220629182145-10091 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (5.98s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220629182145-10091 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220629182145-10091 --alsologtostderr -v=5: (5.981579565s)
--- PASS: TestPause/serial/DeletePaused (5.98s)

TestPause/serial/VerifyDeletedResources (5.73s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (5.621412217s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220629182145-10091
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220629182145-10091: exit status 1 (33.663958ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220629182145-10091

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (5.73s)

TestStartStop/group/old-k8s-version/serial/FirstStart (106.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220629182346-10091 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220629182346-10091 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m46.061981339s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (106.06s)

TestStartStop/group/no-preload/serial/FirstStart (60.61s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220629182400-10091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220629182400-10091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: (1m0.612430505s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (60.61s)

TestStartStop/group/no-preload/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220629182400-10091 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [861037a1-97f8-4e56-b5d4-1f281050c19d] Pending
helpers_test.go:342: "busybox" [861037a1-97f8-4e56-b5d4-1f281050c19d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [861037a1-97f8-4e56-b5d4-1f281050c19d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.013200549s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220629182400-10091 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.55s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220629182400-10091 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220629182400-10091 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.55s)

TestStartStop/group/no-preload/serial/Stop (20.14s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220629182400-10091 --alsologtostderr -v=3
E0629 18:25:12.198951   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220629182400-10091 --alsologtostderr -v=3: (20.137766613s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.14s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220629182400-10091 -n no-preload-20220629182400-10091
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220629182400-10091 -n no-preload-20220629182400-10091: exit status 7 (103.241369ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220629182400-10091 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (323.8s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220629182400-10091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220629182400-10091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: (5m23.398890831s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220629182400-10091 -n no-preload-20220629182400-10091
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (323.80s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220629182346-10091 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [d52936f4-d1c2-441c-8092-0129be46f537] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [d52936f4-d1c2-441c-8092-0129be46f537] Running
E0629 18:25:37.755186   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.01116914s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220629182346-10091 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.31s)
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220629182346-10091 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220629182346-10091 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.55s)
TestStartStop/group/old-k8s-version/serial/Stop (20.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220629182346-10091 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220629182346-10091 --alsologtostderr -v=3: (20.115342516s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.12s)
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220629182346-10091 -n old-k8s-version-20220629182346-10091
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220629182346-10091 -n old-k8s-version-20220629182346-10091: exit status 7 (102.441477ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220629182346-10091 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
TestStartStop/group/old-k8s-version/serial/SecondStart (419.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220629182346-10091 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220629182346-10091 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (6m59.274199686s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220629182346-10091 -n old-k8s-version-20220629182346-10091
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (419.69s)
TestStartStop/group/default-k8s-different-port/serial/FirstStart (45.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220629182651-10091 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
E0629 18:27:23.145697   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220629182651-10091 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: (45.978928772s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (45.98s)
TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220629182651-10091 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [461598ca-0753-4c8d-a804-bf32db53c9fa] Pending
helpers_test.go:342: "busybox" [461598ca-0753-4c8d-a804-bf32db53c9fa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [461598ca-0753-4c8d-a804-bf32db53c9fa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.011100185s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220629182651-10091 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.39s)
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220629182651-10091 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220629182651-10091 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.66s)
TestStartStop/group/default-k8s-different-port/serial/Stop (20.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220629182651-10091 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220629182651-10091 --alsologtostderr -v=3: (20.097598627s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.10s)
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220629182651-10091 -n default-k8s-different-port-20220629182651-10091
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220629182651-10091 -n default-k8s-different-port-20220629182651-10091: exit status 7 (96.138243ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220629182651-10091 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.21s)
TestStartStop/group/default-k8s-different-port/serial/SecondStart (320.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220629182651-10091 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
E0629 18:30:12.198905   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220629182651-10091 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: (5m20.523413569s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220629182651-10091 -n default-k8s-different-port-20220629182651-10091
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (320.93s)
TestStartStop/group/newest-cni/serial/FirstStart (47.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220629183022-10091 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
E0629 18:30:37.755261   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220629183022-10091 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: (47.534580567s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.53s)
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-tpwbn" [9b7dd82d-ab4a-4652-af88-6bd93865e3ae] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011889374s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-tpwbn" [9b7dd82d-ab4a-4652-af88-6bd93865e3ae] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006424363s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-20220629182400-10091 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220629182400-10091 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)
TestStartStop/group/no-preload/serial/Pause (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220629182400-10091 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220629182400-10091 -n no-preload-20220629182400-10091
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220629182400-10091 -n no-preload-20220629182400-10091: exit status 2 (384.128581ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220629182400-10091 -n no-preload-20220629182400-10091
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220629182400-10091 -n no-preload-20220629182400-10091: exit status 2 (378.63595ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220629182400-10091 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220629182400-10091 -n no-preload-20220629182400-10091
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220629182400-10091 -n no-preload-20220629182400-10091
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.29s)
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220629183022-10091 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)
TestStartStop/group/newest-cni/serial/Stop (20.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220629183022-10091 --alsologtostderr -v=3
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220629183022-10091 --alsologtostderr -v=3: (20.15220365s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.15s)
TestStartStop/group/embed-certs/serial/FirstStart (57.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220629183112-10091 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220629183112-10091 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: (57.867884645s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (57.87s)
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220629183022-10091 -n newest-cni-20220629183022-10091
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220629183022-10091 -n newest-cni-20220629183022-10091: exit status 7 (108.701476ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220629183022-10091 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)
TestStartStop/group/newest-cni/serial/SecondStart (30.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220629183022-10091 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220629183022-10091 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: (30.089111117s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220629183022-10091 -n newest-cni-20220629183022-10091
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.56s)
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220629183022-10091 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)
TestStartStop/group/newest-cni/serial/Pause (3.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220629183022-10091 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220629183022-10091 -n newest-cni-20220629183022-10091
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220629183022-10091 -n newest-cni-20220629183022-10091: exit status 2 (439.789293ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220629183022-10091 -n newest-cni-20220629183022-10091
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220629183022-10091 -n newest-cni-20220629183022-10091: exit status 2 (434.853199ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220629183022-10091 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220629183022-10091 -n newest-cni-20220629183022-10091
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220629183022-10091 -n newest-cni-20220629183022-10091
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.34s)
TestNetworkPlugins/group/auto/Start (80.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220629182252-10091 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220629182252-10091 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (1m20.144707374s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.14s)
TestStartStop/group/embed-certs/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220629183112-10091 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [ec2f580c-758a-496a-8b4e-89b673541b8f] Pending
helpers_test.go:342: "busybox" [ec2f580c-758a-496a-8b4e-89b673541b8f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [ec2f580c-758a-496a-8b4e-89b673541b8f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.012689767s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220629183112-10091 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.30s)
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220629183112-10091 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220629183112-10091 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.65s)
TestStartStop/group/embed-certs/serial/Stop (20.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220629183112-10091 --alsologtostderr -v=3
E0629 18:32:23.145687   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629180105-10091/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220629183112-10091 --alsologtostderr -v=3: (20.165605093s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.17s)
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220629183112-10091 -n embed-certs-20220629183112-10091
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220629183112-10091 -n embed-certs-20220629183112-10091: exit status 7 (97.672662ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220629183112-10091 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (557.08s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220629183112-10091 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220629183112-10091 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.2: (9m16.68092955s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220629183112-10091 -n embed-certs-20220629183112-10091
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (557.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-f7crf" [c4304e08-10d4-499e-8d52-45e76c0f6f9a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012549129s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-f7crf" [c4304e08-10d4-499e-8d52-45e76c0f6f9a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006164326s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220629182346-10091 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220629182346-10091 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.47s)
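The VerifyKubernetesImages step above SSHes into the node, runs `sudo crictl images -o json`, and reports any image that does not come from a registry minikube itself populates. A minimal sketch of that kind of filtering; the sample JSON follows the shape `crictl images -o json` emits, and the registry allowlist is an illustrative assumption, not the exact list used by start_stop_delete_test.go:

```python
import json

# Abridged sample in the shape `crictl images -o json` produces (assumed for illustration).
CRICTL_JSON = """
{
  "images": [
    {"id": "sha256:aaa", "repoTags": ["k8s.gcr.io/pause:3.6"]},
    {"id": "sha256:bbb", "repoTags": ["kindest/kindnetd:v20220510-4929dd75"]},
    {"id": "sha256:ccc", "repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]}
  ]
}
"""

# Registries treated as "minikube's own" -- a hypothetical heuristic for this sketch.
MINIKUBE_REGISTRIES = ("k8s.gcr.io/", "registry.k8s.io/", "docker.io/kubernetesui/")


def non_minikube_images(raw_json: str) -> list[str]:
    """Return repo tags that do not start with a known minikube registry prefix."""
    images = json.loads(raw_json)["images"]
    tags = [tag for img in images for tag in img.get("repoTags", [])]
    return [t for t in tags if not t.startswith(MINIKUBE_REGISTRIES)]


for tag in non_minikube_images(CRICTL_JSON):
    print("Found non-minikube image:", tag)
```

On the sample above this flags the same two images the test reported here: the kindnetd CNI image and the busybox test image.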

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220629182346-10091 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629182346-10091 -n old-k8s-version-20220629182346-10091
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629182346-10091 -n old-k8s-version-20220629182346-10091: exit status 2 (388.146907ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220629182346-10091 -n old-k8s-version-20220629182346-10091
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220629182346-10091 -n old-k8s-version-20220629182346-10091: exit status 2 (415.380795ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220629182346-10091 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629182346-10091 -n old-k8s-version-20220629182346-10091
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220629182346-10091 -n old-k8s-version-20220629182346-10091
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.18s)
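The Pause steps above repeatedly log "status error: exit status 2 (may be ok)" (and EnableAddonAfterStop logs "exit status 7 (may be ok)"): while a profile is paused or stopped, `minikube status` deliberately exits non-zero, and the harness tolerates those specific codes instead of failing the test. A tiny sketch of that pattern; the tolerated set is taken from the codes observed in this report, not from minikube's documented exit-code table:

```python
# Exit codes from `minikube status` that this report's harness treats as acceptable:
#   0 -> profile fully running
#   2 -> components paused (APIServer=Paused, Kubelet=Stopped)
#   7 -> host stopped
# Any other code is treated as a genuine status failure.
TOLERATED_STATUS_CODES = frozenset({0, 2, 7})


def status_ok(exit_code: int, tolerated: frozenset = TOLERATED_STATUS_CODES) -> bool:
    """True if a `minikube status` exit code should not fail the test step."""
    return exit_code in tolerated
```

Under this rule the "Non-zero exit ... exit status 2" lines above are informational rather than fatal, which is why the surrounding steps still PASS.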

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (62.99s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220629182253-10091 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220629182253-10091 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m2.985763598s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.99s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-lptsd" [fb15c210-6528-46a4-936e-397027c17f98] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011364118s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220629182252-10091 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220629182252-10091 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-xzhmc" [1ed2818b-c7ea-4964-8e6b-7be8344d140a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-xzhmc" [1ed2818b-c7ea-4964-8e6b-7be8344d140a] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.006140471s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.33s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-lptsd" [fb15c210-6528-46a4-936e-397027c17f98] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006666613s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-different-port-20220629182651-10091 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220629182252-10091 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220629182252-10091 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220629182252-10091 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220629182651-10091 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.43s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (3.46s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220629182651-10091 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220629182651-10091 -n default-k8s-different-port-20220629182651-10091
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220629182651-10091 -n default-k8s-different-port-20220629182651-10091: exit status 2 (423.725448ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220629182651-10091 -n default-k8s-different-port-20220629182651-10091
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220629182651-10091 -n default-k8s-different-port-20220629182651-10091: exit status 2 (435.662404ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220629182651-10091 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220629182651-10091 -n default-k8s-different-port-20220629182651-10091
E0629 18:33:40.803458   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220629182651-10091 -n default-k8s-different-port-20220629182651-10091
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.46s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (79.02s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220629182253-10091 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220629182253-10091 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m19.024339974s)
--- PASS: TestNetworkPlugins/group/cilium/Start (79.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-ks85c" [c8b41053-61a5-4fd9-b26d-42069a0c2a3c] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.015353753s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.53s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220629182253-10091 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.53s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220629182253-10091 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-885hx" [b050ad52-ee06-4264-877c-44ea2ec25f33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-885hx" [b050ad52-ee06-4264-877c-44ea2ec25f33] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.009101919s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220629182253-10091 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220629182253-10091 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220629182253-10091 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (300.57s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220629182252-10091 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220629182252-10091 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (5m0.567617307s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (300.57s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-mfjfx" [089475b2-a2a7-45e0-95b3-79432b4d52ca] Running
E0629 18:35:01.482470   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:35:01.487810   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:35:01.498065   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:35:01.518823   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:35:01.559084   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:35:01.639411   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:35:01.800337   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:35:02.121016   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:35:02.762192   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.01452249s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220629182253-10091 "pgrep -a kubelet"
E0629 18:35:04.042870   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (9.84s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220629182253-10091 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-gf6vk" [6e92fa77-da98-472d-b2b9-48ceb9ae3e9e] Pending
helpers_test.go:342: "netcat-869c55b6dc-gf6vk" [6e92fa77-da98-472d-b2b9-48ceb9ae3e9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0629 18:35:06.603665   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
helpers_test.go:342: "netcat-869c55b6dc-gf6vk" [6e92fa77-da98-472d-b2b9-48ceb9ae3e9e] Running
E0629 18:35:11.724686   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:35:12.198520   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629175813-10091/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.005716583s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (9.84s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220629182253-10091 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220629182253-10091 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220629182253-10091 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (36.86s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220629182252-10091 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
E0629 18:35:21.965053   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:35:32.908292   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:35:32.913553   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:35:32.923778   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:35:32.944027   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:35:32.984342   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:35:33.064657   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:35:33.225242   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:35:33.546334   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:35:34.186811   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:35:35.467272   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:35:37.755097   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629175321-10091/client.crt: no such file or directory
E0629 18:35:38.027705   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:35:42.445452   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629182400-10091/client.crt: no such file or directory
E0629 18:35:43.147957   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
E0629 18:35:53.389099   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629182346-10091/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220629182252-10091 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (36.855534373s)
--- PASS: TestNetworkPlugins/group/bridge/Start (36.86s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220629182252-10091 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220629182252-10091 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-hd6fc" [61213722-1cd8-4b5f-b7b1-9be5a2223d96] Pending
helpers_test.go:342: "netcat-869c55b6dc-hd6fc" [61213722-1cd8-4b5f-b7b1-9be5a2223d96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-hd6fc" [61213722-1cd8-4b5f-b7b1-9be5a2223d96] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.008993044s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220629182252-10091 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220629182252-10091 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-5jkng" [d9db6e7a-5573-4a1e-8d88-9f0125989588] Pending
helpers_test.go:342: "netcat-869c55b6dc-5jkng" [d9db6e7a-5573-4a1e-8d88-9f0125989588] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0629 18:39:40.667029   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
helpers_test.go:342: "netcat-869c55b6dc-5jkng" [d9db6e7a-5573-4a1e-8d88-9f0125989588] Running
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.006098247s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-2p5bh" [eb472d4a-d0d6-4fac-b7c2-52ddd14ec666] Running
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-2p5bh" [eb472d4a-d0d6-4fac-b7c2-52ddd14ec666] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010986447s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-2p5bh" [eb472d4a-d0d6-4fac-b7c2-52ddd14ec666] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0629 18:42:04.029702   10091 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14420-3561-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629182253-10091/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007027104s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220629183112-10091 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220629183112-10091 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220629183112-10091 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220629183112-10091 -n embed-certs-20220629183112-10091
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220629183112-10091 -n embed-certs-20220629183112-10091: exit status 2 (391.165497ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220629183112-10091 -n embed-certs-20220629183112-10091
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220629183112-10091 -n embed-certs-20220629183112-10091: exit status 2 (387.096087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220629183112-10091 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220629183112-10091 -n embed-certs-20220629183112-10091
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220629183112-10091 -n embed-certs-20220629183112-10091
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

                                                
                                    

Test skip (23/275)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.24.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.24.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.24.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.24.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.24.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.24.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.24.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.24.2/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.24.2/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:455: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220629183112-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220629183112-10091
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-20220629182252-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20220629182252-10091
--- SKIP: TestNetworkPlugins/group/kubenet (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220629182252-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220629182252-10091
--- SKIP: TestNetworkPlugins/group/flannel (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220629182253-10091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-20220629182253-10091
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.30s)

                                                
                                    