Test Report: Docker_Linux_containerd 14555

f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd:2022-07-28:25060

Test failures (5/273)

Order  Failed test                                      Duration (s)
211    TestKubernetesUpgrade                            584.47
312    TestNetworkPlugins/group/calico/Start            532.33
316    TestNetworkPlugins/group/kindnet/DNS             368.22
326    TestNetworkPlugins/group/enable-default-cni/DNS  307.76
334    TestNetworkPlugins/group/bridge/DNS              299.54
TestKubernetesUpgrade (584.47s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220728205630-9812 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220728205630-9812 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (49.801526054s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220728205630-9812
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220728205630-9812: (2.395008557s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220728205630-9812 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220728205630-9812 status --format={{.Host}}: exit status 7 (136.823578ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220728205630-9812 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220728205630-9812 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (8m47.214151725s)

-- stdout --
	* [kubernetes-upgrade-20220728205630-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-20220728205630-9812 in cluster kubernetes-upgrade-20220728205630-9812
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-20220728205630-9812" ...
	* Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Jul 28 21:06:09 kubernetes-upgrade-20220728205630-9812 kubelet[11608]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	
	

-- /stdout --
** stderr ** 
	I0728 20:57:22.866399  160802 out.go:296] Setting OutFile to fd 1 ...
	I0728 20:57:22.866524  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:57:22.866534  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 20:57:22.866541  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:57:22.866690  160802 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 20:57:22.867437  160802 out.go:303] Setting JSON to false
	I0728 20:57:22.869980  160802 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2393,"bootTime":1659039450,"procs":941,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0728 20:57:22.870074  160802 start.go:125] virtualization: kvm guest
	I0728 20:57:22.872793  160802 out.go:177] * [kubernetes-upgrade-20220728205630-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0728 20:57:22.874928  160802 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 20:57:22.874850  160802 notify.go:193] Checking for updates...
	I0728 20:57:22.874990  160802 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0728 20:57:22.877463  160802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 20:57:22.879433  160802 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 20:57:22.881270  160802 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 20:57:22.883209  160802 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0728 20:57:22.887381  160802 config.go:178] Loaded profile config "kubernetes-upgrade-20220728205630-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0728 20:57:22.888391  160802 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 20:57:22.961516  160802 docker.go:137] docker version: linux-20.10.17
	I0728 20:57:22.961654  160802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 20:57:23.046603  160802 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-containerd-overlay2-amd64.tar.lz4.checksum
	I0728 20:57:23.157053  160802 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:71 SystemTime:2022-07-28 20:57:23.020323245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 20:57:23.157164  160802 docker.go:254] overlay module found
	I0728 20:57:23.160556  160802 out.go:177] * Using the docker driver based on existing profile
	I0728 20:57:23.161955  160802 start.go:284] selected driver: docker
	I0728 20:57:23.161984  160802 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-20220728205630-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220728205630-9
812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 20:57:23.162137  160802 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 20:57:23.163457  160802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 20:57:23.314494  160802 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:71 SystemTime:2022-07-28 20:57:23.19970577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientI
nfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 20:57:23.314857  160802 cni.go:95] Creating CNI manager for ""
	I0728 20:57:23.314919  160802 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0728 20:57:23.314937  160802 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220728205630-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220728205630-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 20:57:23.317480  160802 out.go:177] * Starting control plane node kubernetes-upgrade-20220728205630-9812 in cluster kubernetes-upgrade-20220728205630-9812
	I0728 20:57:23.318985  160802 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0728 20:57:23.320334  160802 out.go:177] * Pulling base image ...
	I0728 20:57:23.321840  160802 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0728 20:57:23.321879  160802 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 20:57:23.321906  160802 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4
	I0728 20:57:23.321920  160802 cache.go:57] Caching tarball of preloaded images
	I0728 20:57:23.322218  160802 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 20:57:23.322242  160802 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on containerd
	I0728 20:57:23.322442  160802 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/config.json ...
	I0728 20:57:23.377916  160802 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 20:57:23.377951  160802 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 20:57:23.377974  160802 cache.go:208] Successfully downloaded all kic artifacts
	I0728 20:57:23.378027  160802 start.go:370] acquiring machines lock for kubernetes-upgrade-20220728205630-9812: {Name:mk7be54e287cbff99b673df45d9b1f000bca8d24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 20:57:23.378178  160802 start.go:374] acquired machines lock for "kubernetes-upgrade-20220728205630-9812" in 98.875µs
	I0728 20:57:23.378207  160802 start.go:95] Skipping create...Using existing machine configuration
	I0728 20:57:23.378218  160802 fix.go:55] fixHost starting: 
	I0728 20:57:23.378543  160802 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220728205630-9812 --format={{.State.Status}}
	I0728 20:57:23.430980  160802 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220728205630-9812: state=Stopped err=<nil>
	W0728 20:57:23.431017  160802 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 20:57:23.436072  160802 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220728205630-9812" ...
	I0728 20:57:23.437700  160802 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220728205630-9812
	I0728 20:57:23.977820  160802 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220728205630-9812 --format={{.State.Status}}
	I0728 20:57:24.042235  160802 kic.go:415] container "kubernetes-upgrade-20220728205630-9812" state is running.
	I0728 20:57:24.042750  160802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220728205630-9812
	I0728 20:57:24.094180  160802 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/config.json ...
	I0728 20:57:24.094428  160802 machine.go:88] provisioning docker machine ...
	I0728 20:57:24.094453  160802 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220728205630-9812"
	I0728 20:57:24.094502  160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
	I0728 20:57:24.146439  160802 main.go:134] libmachine: Using SSH client type: native
	I0728 20:57:24.146650  160802 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil>  [] 0s} 127.0.0.1 49337 <nil> <nil>}
	I0728 20:57:24.146670  160802 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220728205630-9812 && echo "kubernetes-upgrade-20220728205630-9812" | sudo tee /etc/hostname
	I0728 20:57:24.147615  160802 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34250->127.0.0.1:49337: read: connection reset by peer
	I0728 20:57:27.293929  160802 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220728205630-9812
	
	I0728 20:57:27.294016  160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
	I0728 20:57:27.349629  160802 main.go:134] libmachine: Using SSH client type: native
	I0728 20:57:27.349838  160802 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil>  [] 0s} 127.0.0.1 49337 <nil> <nil>}
	I0728 20:57:27.349875  160802 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220728205630-9812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220728205630-9812/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220728205630-9812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 20:57:27.485165  160802 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 20:57:27.485218  160802 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 20:57:27.485259  160802 ubuntu.go:177] setting up certificates
	I0728 20:57:27.485271  160802 provision.go:83] configureAuth start
	I0728 20:57:27.485388  160802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220728205630-9812
	I0728 20:57:27.549901  160802 provision.go:138] copyHostCerts
	I0728 20:57:27.549980  160802 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 20:57:27.550000  160802 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 20:57:27.550089  160802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1078 bytes)
	I0728 20:57:27.550229  160802 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 20:57:27.550247  160802 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 20:57:27.550293  160802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 20:57:27.550376  160802 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 20:57:27.550387  160802 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 20:57:27.550424  160802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 20:57:27.550484  160802 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220728205630-9812 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220728205630-9812]
	I0728 20:57:27.634672  160802 provision.go:172] copyRemoteCerts
	I0728 20:57:27.634765  160802 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 20:57:27.634829  160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
	I0728 20:57:27.679230  160802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728205630-9812/id_rsa Username:docker}
	I0728 20:57:27.773133  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 20:57:27.795592  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0728 20:57:27.817587  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 20:57:27.842683  160802 provision.go:86] duration metric: configureAuth took 357.40006ms
	I0728 20:57:27.842711  160802 ubuntu.go:193] setting minikube options for container-runtime
	I0728 20:57:27.842965  160802 config.go:178] Loaded profile config "kubernetes-upgrade-20220728205630-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 20:57:27.842986  160802 machine.go:91] provisioned docker machine in 3.748541238s
	I0728 20:57:27.842997  160802 start.go:307] post-start starting for "kubernetes-upgrade-20220728205630-9812" (driver="docker")
	I0728 20:57:27.843005  160802 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 20:57:27.843059  160802 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 20:57:27.843101  160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
	I0728 20:57:27.892044  160802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728205630-9812/id_rsa Username:docker}
	I0728 20:57:28.007429  160802 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 20:57:28.012518  160802 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 20:57:28.012548  160802 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 20:57:28.012561  160802 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 20:57:28.012569  160802 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 20:57:28.012582  160802 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 20:57:28.012639  160802 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 20:57:28.012738  160802 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem -> 98122.pem in /etc/ssl/certs
	I0728 20:57:28.012857  160802 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 20:57:28.028732  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem --> /etc/ssl/certs/98122.pem (1708 bytes)
	I0728 20:57:28.063656  160802 start.go:310] post-start completed in 220.643878ms
	I0728 20:57:28.063742  160802 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 20:57:28.063789  160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
	I0728 20:57:28.105141  160802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728205630-9812/id_rsa Username:docker}
	I0728 20:57:28.195810  160802 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 20:57:28.200454  160802 fix.go:57] fixHost completed within 4.822228352s
	I0728 20:57:28.200488  160802 start.go:82] releasing machines lock for "kubernetes-upgrade-20220728205630-9812", held for 4.822292833s
	I0728 20:57:28.200590  160802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220728205630-9812
	I0728 20:57:28.243411  160802 ssh_runner.go:195] Run: systemctl --version
	I0728 20:57:28.243440  160802 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 20:57:28.243476  160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
	I0728 20:57:28.243504  160802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728205630-9812
	I0728 20:57:28.287248  160802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728205630-9812/id_rsa Username:docker}
	I0728 20:57:28.288096  160802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728205630-9812/id_rsa Username:docker}
	I0728 20:57:28.410776  160802 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 20:57:28.427057  160802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 20:57:28.439749  160802 docker.go:188] disabling docker service ...
	I0728 20:57:28.439806  160802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0728 20:57:28.453023  160802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0728 20:57:28.464039  160802 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0728 20:57:28.579586  160802 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0728 20:57:28.672569  160802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0728 20:57:28.684090  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 20:57:28.700124  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0728 20:57:28.709944  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0728 20:57:28.721125  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0728 20:57:28.730446  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0728 20:57:28.741123  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0728 20:57:28.752176  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0728 20:57:28.770413  160802 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 20:57:28.778286  160802 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 20:57:28.787192  160802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 20:57:28.881447  160802 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 20:57:28.967021  160802 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0728 20:57:28.967083  160802 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0728 20:57:28.970732  160802 start.go:471] Will wait 60s for crictl version
	I0728 20:57:28.970799  160802 ssh_runner.go:195] Run: sudo crictl version
	I0728 20:57:29.008345  160802 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-28T20:57:29Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0728 20:57:40.055149  160802 ssh_runner.go:195] Run: sudo crictl version
	I0728 20:57:40.090720  160802 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0728 20:57:40.090790  160802 ssh_runner.go:195] Run: containerd --version
	I0728 20:57:40.127526  160802 ssh_runner.go:195] Run: containerd --version
	I0728 20:57:40.215915  160802 out.go:177] * Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
	I0728 20:57:40.321010  160802 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220728205630-9812 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0728 20:57:40.371607  160802 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0728 20:57:40.378442  160802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 20:57:40.399556  160802 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0728 20:57:40.400845  160802 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0728 20:57:40.400931  160802 ssh_runner.go:195] Run: sudo crictl images --output json
	I0728 20:57:40.439928  160802 containerd.go:543] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.3". assuming images are not preloaded.
	I0728 20:57:40.440011  160802 ssh_runner.go:195] Run: which lz4
	I0728 20:57:40.444389  160802 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0728 20:57:40.449051  160802 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0728 20:57:40.449090  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (447643024 bytes)
	I0728 20:57:41.779435  160802 containerd.go:490] Took 1.335091 seconds to copy over tarball
	I0728 20:57:41.779512  160802 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0728 20:57:46.119076  160802 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.339523499s)
	I0728 20:57:46.119107  160802 containerd.go:497] Took 4.339639 seconds to extract the tarball
	I0728 20:57:46.119121  160802 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0728 20:57:46.268120  160802 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 20:57:46.361829  160802 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 20:57:46.611465  160802 ssh_runner.go:195] Run: sudo crictl images --output json
	I0728 20:57:46.660449  160802 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.3 k8s.gcr.io/kube-controller-manager:v1.24.3 k8s.gcr.io/kube-scheduler:v1.24.3 k8s.gcr.io/kube-proxy:v1.24.3 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0728 20:57:46.660535  160802 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 20:57:46.660535  160802 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.3
	I0728 20:57:46.660640  160802 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I0728 20:57:46.660754  160802 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.3
	I0728 20:57:46.660778  160802 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0728 20:57:46.660844  160802 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.3
	I0728 20:57:46.660759  160802 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I0728 20:57:46.660974  160802 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.3
	I0728 20:57:46.662604  160802 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.3: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.3
	I0728 20:57:46.662671  160802 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 20:57:46.662721  160802 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I0728 20:57:46.662925  160802 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0728 20:57:46.662857  160802 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.3: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.3
	I0728 20:57:46.662985  160802 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I0728 20:57:46.662608  160802 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.3: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.3
	I0728 20:57:46.663093  160802 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.3: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.3
	I0728 20:57:47.155423  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I0728 20:57:47.162590  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.3"
	I0728 20:57:47.167294  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.3"
	I0728 20:57:47.198831  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.3"
	I0728 20:57:47.207411  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I0728 20:57:47.207767  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.3"
	I0728 20:57:47.210394  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I0728 20:57:47.535745  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0728 20:57:48.030433  160802 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0728 20:57:48.073678  160802 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I0728 20:57:48.073742  160802 ssh_runner.go:195] Run: which crictl
	I0728 20:57:48.043910  160802 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.3" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.3" does not exist at hash "586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f" in container runtime
	I0728 20:57:48.073854  160802 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.3
	I0728 20:57:48.073882  160802 ssh_runner.go:195] Run: which crictl
	I0728 20:57:48.120097  160802 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.3" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.3" does not exist at hash "3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0" in container runtime
	I0728 20:57:48.120164  160802 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.3
	I0728 20:57:48.120209  160802 ssh_runner.go:195] Run: which crictl
	I0728 20:57:48.264494  160802 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.3": (1.065614646s)
	I0728 20:57:48.264546  160802 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.3" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.3" does not exist at hash "2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302" in container runtime
	I0728 20:57:48.264576  160802 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.3
	I0728 20:57:48.264595  160802 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7": (1.057140613s)
	I0728 20:57:48.264623  160802 ssh_runner.go:195] Run: which crictl
	I0728 20:57:48.264649  160802 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0728 20:57:48.264674  160802 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.3": (1.056873554s)
	I0728 20:57:48.264692  160802 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I0728 20:57:48.264727  160802 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6": (1.054299086s)
	I0728 20:57:48.264750  160802 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0728 20:57:48.264759  160802 ssh_runner.go:195] Run: which crictl
	I0728 20:57:48.264774  160802 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0728 20:57:48.264801  160802 ssh_runner.go:195] Run: which crictl
	I0728 20:57:48.264700  160802 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.3" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.3" does not exist at hash "d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db" in container runtime
	I0728 20:57:48.264833  160802 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.3
	I0728 20:57:48.264857  160802 ssh_runner.go:195] Run: which crictl
	I0728 20:57:48.360890  160802 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0728 20:57:48.360937  160802 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 20:57:48.360980  160802 ssh_runner.go:195] Run: which crictl
	I0728 20:57:48.361007  160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.3
	I0728 20:57:48.361085  160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I0728 20:57:48.361101  160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.3
	I0728 20:57:48.361122  160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.3
	I0728 20:57:48.361189  160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I0728 20:57:48.361235  160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.3
	I0728 20:57:48.361308  160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I0728 20:57:49.192141  160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3
	I0728 20:57:49.192251  160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.3
	I0728 20:57:49.192342  160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I0728 20:57:49.192393  160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0728 20:57:49.192584  160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3
	I0728 20:57:49.192656  160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.3
	I0728 20:57:49.192769  160802 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 20:57:49.192863  160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3
	I0728 20:57:49.192916  160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.3
	I0728 20:57:49.197188  160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I0728 20:57:49.197309  160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0728 20:57:49.197385  160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I0728 20:57:49.197466  160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0728 20:57:49.197528  160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3
	I0728 20:57:49.197586  160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.3
	I0728 20:57:49.265698  160802 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0728 20:57:49.265835  160802 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0728 20:57:49.265940  160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.24.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.24.3': No such file or directory
	I0728 20:57:49.265968  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3 --> /var/lib/minikube/images/kube-controller-manager_v1.24.3 (31038464 bytes)
	I0728 20:57:49.266038  160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0728 20:57:49.266057  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0728 20:57:49.266131  160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.24.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.24.3': No such file or directory
	I0728 20:57:49.266142  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3 --> /var/lib/minikube/images/kube-proxy_v1.24.3 (39518208 bytes)
	I0728 20:57:49.266199  160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.24.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.24.3': No such file or directory
	I0728 20:57:49.266258  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3 --> /var/lib/minikube/images/kube-scheduler_v1.24.3 (15491584 bytes)
	I0728 20:57:49.266324  160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.24.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.24.3': No such file or directory
	I0728 20:57:49.266339  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3 --> /var/lib/minikube/images/kube-apiserver_v1.24.3 (33799168 bytes)
	I0728 20:57:49.266400  160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0728 20:57:49.266424  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0728 20:57:49.266491  160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0728 20:57:49.266513  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0728 20:57:49.279837  160802 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0728 20:57:49.279880  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0728 20:57:49.371984  160802 containerd.go:227] Loading image: /var/lib/minikube/images/pause_3.7
	I0728 20:57:49.372079  160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I0728 20:57:49.703157  160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I0728 20:57:49.703214  160802 containerd.go:227] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0728 20:57:49.703268  160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0728 20:57:53.182209  160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (3.478901177s)
	I0728 20:57:53.182240  160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0728 20:57:53.182266  160802 containerd.go:227] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0728 20:57:53.182312  160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0728 20:57:54.477690  160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.295324483s)
	I0728 20:57:54.477732  160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I0728 20:57:54.477769  160802 containerd.go:227] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.3
	I0728 20:57:54.477832  160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.3
	I0728 20:57:56.764402  160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.3: (2.286535978s)
	I0728 20:57:56.764440  160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3 from cache
	I0728 20:57:56.764474  160802 containerd.go:227] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.3
	I0728 20:57:56.764543  160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.3
	I0728 20:57:58.756812  160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.3: (1.992235085s)
	I0728 20:57:58.756851  160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3 from cache
	I0728 20:57:58.756878  160802 containerd.go:227] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.3
	I0728 20:57:58.756925  160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.3
	I0728 20:58:00.824477  160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.3: (2.067519209s)
	I0728 20:58:00.824511  160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3 from cache
	I0728 20:58:00.824535  160802 containerd.go:227] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.3
	I0728 20:58:00.824582  160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.3
	I0728 20:58:06.955653  160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.3: (6.131039123s)
	I0728 20:58:06.955693  160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3 from cache
	I0728 20:58:06.955736  160802 containerd.go:227] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0728 20:58:06.955816  160802 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I0728 20:58:12.820155  160802 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (5.864301374s)
	I0728 20:58:12.820191  160802 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I0728 20:58:12.820215  160802 cache_images.go:123] Successfully loaded all cached images
	I0728 20:58:12.820221  160802 cache_images.go:92] LoadImages completed in 26.159738226s
	I0728 20:58:12.820281  160802 ssh_runner.go:195] Run: sudo crictl info
	I0728 20:58:12.857260  160802 cni.go:95] Creating CNI manager for ""
	I0728 20:58:12.857288  160802 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0728 20:58:12.857302  160802 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 20:58:12.857314  160802 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220728205630-9812 NodeName:kubernetes-upgrade-20220728205630-9812 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 20:58:12.857459  160802 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-20220728205630-9812"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
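(Editor's note: the kubeadm config rendered above is a single file containing four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. As a rough sketch of that layering, the document kinds can be enumerated with a plain string split; this is illustrative only, not minikube's actual parsing, which uses a full YAML decoder.)

```go
package main

import (
	"fmt"
	"strings"
)

// docKinds splits a multi-document YAML string on "---" separators and
// returns each document's top-level "kind:" value. Simplified sketch:
// a real loader would use a YAML parser, not prefix matching.
func docKinds(yaml string) []string {
	var kinds []string
	for _, doc := range strings.Split(yaml, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kinds = append(kinds, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return kinds
}

func main() {
	// Miniature stand-in for the rendered kubeadm.yaml above.
	cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\n" +
		"apiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n---\n" +
		"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\n" +
		"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration"
	fmt.Println(docKinds(cfg))
}
```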
	
	I0728 20:58:12.857542  160802 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20220728205630-9812 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220728205630-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
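(Editor's note: the kubelet `ExecStart` line in the systemd drop-in above is a single invocation whose `--flag=value` arguments appear in alphabetical order. A minimal sketch of assembling such a line from a flag map follows; the function name and flag subset are illustrative, not minikube's actual API.)

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// execStart renders a binary invocation with flags emitted in sorted
// order, mirroring the alphabetized --flag=value list in the unit file.
func execStart(binary string, flags map[string]string) string {
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := []string{binary}
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
	}
	return strings.Join(parts, " ")
}

func main() {
	// Small illustrative subset of the flags seen in the log above.
	fmt.Println(execStart("/var/lib/minikube/binaries/v1.24.3/kubelet", map[string]string{
		"config":            "/var/lib/kubelet/config.yaml",
		"hostname-override": "kubernetes-upgrade-20220728205630-9812",
		"node-ip":           "192.168.67.2",
	}))
}
```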
	I0728 20:58:12.857591  160802 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 20:58:12.866266  160802 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 20:58:12.866340  160802 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 20:58:12.875028  160802 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0728 20:58:12.890926  160802 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 20:58:12.907372  160802 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0728 20:58:12.937483  160802 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0728 20:58:12.942247  160802 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
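(Editor's note: the bash one-liner above drops any stale `control-plane.minikube.internal` line from `/etc/hosts` and appends the current mapping. The same replace-or-append logic as a sketch; this is a simplification of the shell pipeline, not minikube's own code.)

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\t<host>" and
// appends "ip\thost", matching the grep -v / echo pipeline in the log.
func ensureHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.67.3\tcontrol-plane.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.67.2", "control-plane.minikube.internal"))
}
```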
	I0728 20:58:12.958278  160802 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812 for IP: 192.168.67.2
	I0728 20:58:12.958405  160802 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 20:58:12.958465  160802 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 20:58:12.958574  160802 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/client.key
	I0728 20:58:12.958656  160802 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/apiserver.key.c7fa3a9e
	I0728 20:58:12.958720  160802 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/proxy-client.key
	I0728 20:58:12.958857  160802 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812.pem (1338 bytes)
	W0728 20:58:12.959051  160802 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812_empty.pem, impossibly tiny 0 bytes
	I0728 20:58:12.959082  160802 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 20:58:12.959123  160802 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1078 bytes)
	I0728 20:58:12.959177  160802 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 20:58:12.959226  160802 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 20:58:12.959290  160802 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem (1708 bytes)
	I0728 20:58:12.960147  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 20:58:12.986028  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 20:58:13.007015  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 20:58:13.037224  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 20:58:13.064120  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 20:58:13.084718  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0728 20:58:13.105796  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 20:58:13.140283  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0728 20:58:13.164951  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812.pem --> /usr/share/ca-certificates/9812.pem (1338 bytes)
	I0728 20:58:13.186809  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem --> /usr/share/ca-certificates/98122.pem (1708 bytes)
	I0728 20:58:13.208400  160802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 20:58:13.239182  160802 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 20:58:13.259410  160802 ssh_runner.go:195] Run: openssl version
	I0728 20:58:13.265177  160802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9812.pem && ln -fs /usr/share/ca-certificates/9812.pem /etc/ssl/certs/9812.pem"
	I0728 20:58:13.274469  160802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9812.pem
	I0728 20:58:13.278647  160802 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 20:32 /usr/share/ca-certificates/9812.pem
	I0728 20:58:13.278718  160802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9812.pem
	I0728 20:58:13.284850  160802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9812.pem /etc/ssl/certs/51391683.0"
	I0728 20:58:13.293605  160802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98122.pem && ln -fs /usr/share/ca-certificates/98122.pem /etc/ssl/certs/98122.pem"
	I0728 20:58:13.302462  160802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98122.pem
	I0728 20:58:13.306595  160802 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 20:32 /usr/share/ca-certificates/98122.pem
	I0728 20:58:13.306655  160802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98122.pem
	I0728 20:58:13.314598  160802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98122.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 20:58:13.326004  160802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 20:58:13.338581  160802 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 20:58:13.343850  160802 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0728 20:58:13.343923  160802 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 20:58:13.352223  160802 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 20:58:13.363049  160802 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220728205630-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220728205630-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 20:58:13.363159  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0728 20:58:13.363205  160802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0728 20:58:13.391718  160802 cri.go:87] found id: ""
	I0728 20:58:13.391791  160802 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 20:58:13.400607  160802 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 20:58:13.400640  160802 kubeadm.go:626] restartCluster start
	I0728 20:58:13.400688  160802 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 20:58:13.409284  160802 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
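(Editor's note: `sudo test -d /data/minikube` exiting with status 1 is not a failure here; minikube uses the non-zero exit as a boolean answer, "the directory does not exist", and skips the compat-symlink step. A hedged sketch of that pattern with `os/exec` follows; it assumes a `test` binary on `PATH`, as on standard Linux, and is not minikube's `ssh_runner` itself.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// dirExists runs `test -d path` and interprets a non-zero exit status
// as "no", the way the log above treats exit status 1 from
// `sudo test -d /data/minikube` as a skip condition, not an error.
func dirExists(path string) bool {
	return exec.Command("test", "-d", path).Run() == nil
}

func main() {
	fmt.Println(dirExists("/tmp"), dirExists("/no/such/dir"))
}
```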
	I0728 20:58:13.409792  160802 kubeconfig.go:116] verify returned: extract IP: "kubernetes-upgrade-20220728205630-9812" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 20:58:13.409950  160802 kubeconfig.go:127] "kubernetes-upgrade-20220728205630-9812" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 20:58:13.410285  160802 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mka3434310bc9890bf6f7ac8ad0a69157716fb18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 20:58:13.411295  160802 kapi.go:59] client config for kubernetes-upgrade-20220728205630-9812: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728205630-9812/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173e480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 20:58:13.411974  160802 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 20:58:13.423206  160802 kubeadm.go:593] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-07-28 20:56:48.392908105 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-07-28 20:58:12.932926410 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.67.2
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.67.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20220728205630-9812
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.24.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0728 20:58:13.423255  160802 kubeadm.go:1092] stopping kube-system containers ...
	I0728 20:58:13.423270  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0728 20:58:13.423348  160802 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0728 20:58:13.473793  160802 cri.go:87] found id: ""
	I0728 20:58:13.473862  160802 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 20:58:13.485672  160802 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 20:58:13.494399  160802 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5755 Jul 28 20:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5791 Jul 28 20:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5955 Jul 28 20:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5739 Jul 28 20:56 /etc/kubernetes/scheduler.conf
	
	I0728 20:58:13.494459  160802 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 20:58:13.503458  160802 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 20:58:13.514285  160802 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 20:58:13.525686  160802 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 20:58:13.537391  160802 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 20:58:13.549491  160802 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 20:58:13.549527  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 20:58:13.599044  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 20:58:14.058149  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 20:58:14.280463  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 20:58:14.344454  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0728 20:58:14.399281  160802 api_server.go:51] waiting for apiserver process to appear ...
	I0728 20:58:14.399364  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:14.920590  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:15.420026  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:15.920602  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:16.419998  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:16.920816  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:17.420237  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:17.920985  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:18.420113  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:18.920621  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:19.420630  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:19.920026  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:20.420202  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:20.920659  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:21.423007  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:21.920093  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:22.419968  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:22.920289  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:23.420526  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:23.920050  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:24.420011  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:24.920801  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:25.420093  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:25.920316  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:26.420382  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:26.920066  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:27.420968  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:27.920877  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:28.420216  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:28.919971  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:29.420285  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:29.920673  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:30.420755  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:30.919959  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:31.420199  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:31.920995  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:32.419973  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:32.920641  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:33.420167  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:33.920055  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:34.420665  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:34.920160  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:35.420181  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:35.920188  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:36.420351  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:36.920313  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:37.420026  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:37.920666  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:38.420282  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:38.920035  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:39.420431  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:39.920870  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:40.420909  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:40.920319  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:41.420805  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:41.920637  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:42.420645  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:42.920288  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:43.420992  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:43.920160  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:44.420867  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:44.920354  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:45.420769  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:45.920598  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:46.420272  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:46.920942  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:47.420411  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:47.920934  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:48.420775  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:48.920710  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:49.420908  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:49.920710  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:50.420755  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:50.920219  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:51.420042  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:51.920791  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:52.420015  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:52.920832  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:53.420302  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:53.920396  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:54.420913  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:54.920144  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:55.420979  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:55.920808  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:56.420599  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:56.920767  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:57.420403  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:57.920243  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:58.420793  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:58.920556  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:59.420248  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:58:59.920290  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:00.420775  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:00.920254  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:01.420633  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:01.920794  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:02.420958  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:02.920899  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:03.420311  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:03.920739  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:04.421016  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:04.920414  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:05.420561  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:05.920409  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:06.420727  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:06.920102  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:07.420070  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:07.920559  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:08.420780  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:08.920109  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:09.420050  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:09.920826  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:10.420960  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:10.920263  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:11.420329  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:11.920838  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:12.420168  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:12.920406  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:13.420519  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:13.920440  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:14.420568  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 20:59:14.420674  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 20:59:14.452004  160802 cri.go:87] found id: ""
	I0728 20:59:14.452040  160802 logs.go:274] 0 containers: []
	W0728 20:59:14.452052  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 20:59:14.452062  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 20:59:14.452138  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 20:59:14.482244  160802 cri.go:87] found id: ""
	I0728 20:59:14.482270  160802 logs.go:274] 0 containers: []
	W0728 20:59:14.482276  160802 logs.go:276] No container was found matching "etcd"
	I0728 20:59:14.482283  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 20:59:14.482337  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 20:59:14.510585  160802 cri.go:87] found id: ""
	I0728 20:59:14.510618  160802 logs.go:274] 0 containers: []
	W0728 20:59:14.510629  160802 logs.go:276] No container was found matching "coredns"
	I0728 20:59:14.510639  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 20:59:14.510714  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 20:59:14.539772  160802 cri.go:87] found id: ""
	I0728 20:59:14.539803  160802 logs.go:274] 0 containers: []
	W0728 20:59:14.539817  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 20:59:14.539826  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 20:59:14.539894  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 20:59:14.568212  160802 cri.go:87] found id: ""
	I0728 20:59:14.568243  160802 logs.go:274] 0 containers: []
	W0728 20:59:14.568251  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 20:59:14.568260  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 20:59:14.568324  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 20:59:14.598386  160802 cri.go:87] found id: ""
	I0728 20:59:14.598416  160802 logs.go:274] 0 containers: []
	W0728 20:59:14.598425  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 20:59:14.598433  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 20:59:14.598495  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 20:59:14.628899  160802 cri.go:87] found id: ""
	I0728 20:59:14.628929  160802 logs.go:274] 0 containers: []
	W0728 20:59:14.628939  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 20:59:14.628947  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 20:59:14.629005  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 20:59:14.660805  160802 cri.go:87] found id: ""
	I0728 20:59:14.660839  160802 logs.go:274] 0 containers: []
	W0728 20:59:14.660849  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 20:59:14.660860  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 20:59:14.660876  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 20:59:14.737395  160802 logs.go:138] Found kubelet problem: Jul 28 20:59:14 kubernetes-upgrade-20220728205630-9812 kubelet[2333]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 20:59:14.787836  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 20:59:14.787880  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 20:59:14.804357  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 20:59:14.804410  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 20:59:14.864241  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 20:59:14.864270  160802 logs.go:123] Gathering logs for containerd ...
	I0728 20:59:14.864281  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 20:59:14.901267  160802 logs.go:123] Gathering logs for container status ...
	I0728 20:59:14.901323  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 20:59:14.931359  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 20:59:14.931389  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 20:59:14.931506  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 20:59:14.931524  160802 out.go:239]   Jul 28 20:59:14 kubernetes-upgrade-20220728205630-9812 kubelet[2333]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 20:59:14 kubernetes-upgrade-20220728205630-9812 kubelet[2333]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 20:59:14.931540  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 20:59:14.931548  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:59:24.932473  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:25.420239  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 20:59:25.420335  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 20:59:25.450354  160802 cri.go:87] found id: ""
	I0728 20:59:25.450389  160802 logs.go:274] 0 containers: []
	W0728 20:59:25.450398  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 20:59:25.450407  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 20:59:25.450466  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 20:59:25.477734  160802 cri.go:87] found id: ""
	I0728 20:59:25.477767  160802 logs.go:274] 0 containers: []
	W0728 20:59:25.477777  160802 logs.go:276] No container was found matching "etcd"
	I0728 20:59:25.477785  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 20:59:25.477844  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 20:59:25.503937  160802 cri.go:87] found id: ""
	I0728 20:59:25.503968  160802 logs.go:274] 0 containers: []
	W0728 20:59:25.503976  160802 logs.go:276] No container was found matching "coredns"
	I0728 20:59:25.503984  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 20:59:25.504040  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 20:59:25.531863  160802 cri.go:87] found id: ""
	I0728 20:59:25.531900  160802 logs.go:274] 0 containers: []
	W0728 20:59:25.531907  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 20:59:25.531914  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 20:59:25.531963  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 20:59:25.561117  160802 cri.go:87] found id: ""
	I0728 20:59:25.561149  160802 logs.go:274] 0 containers: []
	W0728 20:59:25.561158  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 20:59:25.561166  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 20:59:25.561224  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 20:59:25.589069  160802 cri.go:87] found id: ""
	I0728 20:59:25.589103  160802 logs.go:274] 0 containers: []
	W0728 20:59:25.589113  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 20:59:25.589121  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 20:59:25.589184  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 20:59:25.621490  160802 cri.go:87] found id: ""
	I0728 20:59:25.621519  160802 logs.go:274] 0 containers: []
	W0728 20:59:25.621529  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 20:59:25.621539  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 20:59:25.621596  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 20:59:25.653547  160802 cri.go:87] found id: ""
	I0728 20:59:25.653578  160802 logs.go:274] 0 containers: []
	W0728 20:59:25.653587  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 20:59:25.653598  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 20:59:25.653615  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 20:59:25.701614  160802 logs.go:138] Found kubelet problem: Jul 28 20:59:25 kubernetes-upgrade-20220728205630-9812 kubelet[2713]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 20:59:25.747589  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 20:59:25.747635  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 20:59:25.765145  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 20:59:25.765184  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 20:59:25.823688  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 20:59:25.823717  160802 logs.go:123] Gathering logs for containerd ...
	I0728 20:59:25.823731  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 20:59:25.861132  160802 logs.go:123] Gathering logs for container status ...
	I0728 20:59:25.861181  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 20:59:25.892229  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 20:59:25.892257  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 20:59:25.892385  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 20:59:25.892401  160802 out.go:239]   Jul 28 20:59:25 kubernetes-upgrade-20220728205630-9812 kubelet[2713]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 20:59:25 kubernetes-upgrade-20220728205630-9812 kubelet[2713]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 20:59:25.892406  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 20:59:25.892411  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:59:35.894599  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:35.920357  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 20:59:35.920505  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 20:59:35.948405  160802 cri.go:87] found id: ""
	I0728 20:59:35.948430  160802 logs.go:274] 0 containers: []
	W0728 20:59:35.948436  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 20:59:35.948443  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 20:59:35.948508  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 20:59:35.976431  160802 cri.go:87] found id: ""
	I0728 20:59:35.976462  160802 logs.go:274] 0 containers: []
	W0728 20:59:35.976470  160802 logs.go:276] No container was found matching "etcd"
	I0728 20:59:35.976477  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 20:59:35.976538  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 20:59:36.005563  160802 cri.go:87] found id: ""
	I0728 20:59:36.005589  160802 logs.go:274] 0 containers: []
	W0728 20:59:36.005595  160802 logs.go:276] No container was found matching "coredns"
	I0728 20:59:36.005602  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 20:59:36.005649  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 20:59:36.033705  160802 cri.go:87] found id: ""
	I0728 20:59:36.033734  160802 logs.go:274] 0 containers: []
	W0728 20:59:36.033740  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 20:59:36.033745  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 20:59:36.033799  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 20:59:36.062919  160802 cri.go:87] found id: ""
	I0728 20:59:36.062953  160802 logs.go:274] 0 containers: []
	W0728 20:59:36.062962  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 20:59:36.062972  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 20:59:36.063034  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 20:59:36.090984  160802 cri.go:87] found id: ""
	I0728 20:59:36.091021  160802 logs.go:274] 0 containers: []
	W0728 20:59:36.091031  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 20:59:36.091040  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 20:59:36.091102  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 20:59:36.119742  160802 cri.go:87] found id: ""
	I0728 20:59:36.119775  160802 logs.go:274] 0 containers: []
	W0728 20:59:36.119784  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 20:59:36.119797  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 20:59:36.119858  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 20:59:36.156979  160802 cri.go:87] found id: ""
	I0728 20:59:36.157012  160802 logs.go:274] 0 containers: []
	W0728 20:59:36.157022  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 20:59:36.157035  160802 logs.go:123] Gathering logs for containerd ...
	I0728 20:59:36.157051  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 20:59:36.201189  160802 logs.go:123] Gathering logs for container status ...
	I0728 20:59:36.201245  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 20:59:36.239188  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 20:59:36.239224  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 20:59:36.290569  160802 logs.go:138] Found kubelet problem: Jul 28 20:59:36 kubernetes-upgrade-20220728205630-9812 kubelet[3010]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 20:59:36.339953  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 20:59:36.339992  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 20:59:36.359173  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 20:59:36.359208  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 20:59:36.415878  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 20:59:36.415910  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 20:59:36.415924  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 20:59:36.416070  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 20:59:36.416088  160802 out.go:239]   Jul 28 20:59:36 kubernetes-upgrade-20220728205630-9812 kubelet[3010]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 20:59:36 kubernetes-upgrade-20220728205630-9812 kubelet[3010]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 20:59:36.416097  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 20:59:36.416102  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:59:46.417052  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:46.920709  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 20:59:46.920800  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 20:59:46.958993  160802 cri.go:87] found id: ""
	I0728 20:59:46.959021  160802 logs.go:274] 0 containers: []
	W0728 20:59:46.959029  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 20:59:46.959038  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 20:59:46.959113  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 20:59:46.991959  160802 cri.go:87] found id: ""
	I0728 20:59:46.991989  160802 logs.go:274] 0 containers: []
	W0728 20:59:46.992002  160802 logs.go:276] No container was found matching "etcd"
	I0728 20:59:46.992009  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 20:59:46.992069  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 20:59:47.023763  160802 cri.go:87] found id: ""
	I0728 20:59:47.023796  160802 logs.go:274] 0 containers: []
	W0728 20:59:47.023806  160802 logs.go:276] No container was found matching "coredns"
	I0728 20:59:47.023816  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 20:59:47.023876  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 20:59:47.060629  160802 cri.go:87] found id: ""
	I0728 20:59:47.060659  160802 logs.go:274] 0 containers: []
	W0728 20:59:47.060668  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 20:59:47.060677  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 20:59:47.060733  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 20:59:47.091513  160802 cri.go:87] found id: ""
	I0728 20:59:47.091546  160802 logs.go:274] 0 containers: []
	W0728 20:59:47.091557  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 20:59:47.091566  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 20:59:47.091628  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 20:59:47.125658  160802 cri.go:87] found id: ""
	I0728 20:59:47.125689  160802 logs.go:274] 0 containers: []
	W0728 20:59:47.125698  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 20:59:47.125707  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 20:59:47.125769  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 20:59:47.161864  160802 cri.go:87] found id: ""
	I0728 20:59:47.161895  160802 logs.go:274] 0 containers: []
	W0728 20:59:47.161905  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 20:59:47.161913  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 20:59:47.161968  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 20:59:47.196227  160802 cri.go:87] found id: ""
	I0728 20:59:47.196259  160802 logs.go:274] 0 containers: []
	W0728 20:59:47.196268  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 20:59:47.196281  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 20:59:47.196296  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 20:59:47.269877  160802 logs.go:138] Found kubelet problem: Jul 28 20:59:46 kubernetes-upgrade-20220728205630-9812 kubelet[3231]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 20:59:47.337001  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 20:59:47.337054  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 20:59:47.359286  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 20:59:47.359345  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 20:59:47.440070  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 20:59:47.440099  160802 logs.go:123] Gathering logs for containerd ...
	I0728 20:59:47.440114  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 20:59:47.493104  160802 logs.go:123] Gathering logs for container status ...
	I0728 20:59:47.493146  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 20:59:47.528049  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 20:59:47.528076  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 20:59:47.528204  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 20:59:47.528224  160802 out.go:239]   Jul 28 20:59:46 kubernetes-upgrade-20220728205630-9812 kubelet[3231]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 20:59:46 kubernetes-upgrade-20220728205630-9812 kubelet[3231]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 20:59:47.528233  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 20:59:47.528241  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:59:57.529695  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:59:57.920472  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 20:59:57.920603  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 20:59:57.960187  160802 cri.go:87] found id: ""
	I0728 20:59:57.960222  160802 logs.go:274] 0 containers: []
	W0728 20:59:57.960231  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 20:59:57.960240  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 20:59:57.960304  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 20:59:58.001595  160802 cri.go:87] found id: ""
	I0728 20:59:58.001626  160802 logs.go:274] 0 containers: []
	W0728 20:59:58.001635  160802 logs.go:276] No container was found matching "etcd"
	I0728 20:59:58.001644  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 20:59:58.001717  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 20:59:58.041548  160802 cri.go:87] found id: ""
	I0728 20:59:58.041577  160802 logs.go:274] 0 containers: []
	W0728 20:59:58.041586  160802 logs.go:276] No container was found matching "coredns"
	I0728 20:59:58.041594  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 20:59:58.041661  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 20:59:58.085539  160802 cri.go:87] found id: ""
	I0728 20:59:58.085567  160802 logs.go:274] 0 containers: []
	W0728 20:59:58.085576  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 20:59:58.085585  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 20:59:58.085651  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 20:59:58.123384  160802 cri.go:87] found id: ""
	I0728 20:59:58.123413  160802 logs.go:274] 0 containers: []
	W0728 20:59:58.123423  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 20:59:58.123432  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 20:59:58.123492  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 20:59:58.163432  160802 cri.go:87] found id: ""
	I0728 20:59:58.163461  160802 logs.go:274] 0 containers: []
	W0728 20:59:58.163470  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 20:59:58.163480  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 20:59:58.163548  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 20:59:58.203457  160802 cri.go:87] found id: ""
	I0728 20:59:58.203487  160802 logs.go:274] 0 containers: []
	W0728 20:59:58.203497  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 20:59:58.203507  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 20:59:58.203566  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 20:59:58.241148  160802 cri.go:87] found id: ""
	I0728 20:59:58.241179  160802 logs.go:274] 0 containers: []
	W0728 20:59:58.241190  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 20:59:58.241203  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 20:59:58.241219  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 20:59:58.311357  160802 logs.go:138] Found kubelet problem: Jul 28 20:59:57 kubernetes-upgrade-20220728205630-9812 kubelet[3525]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 20:59:58.374801  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 20:59:58.374838  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 20:59:58.395136  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 20:59:58.395174  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 20:59:58.462065  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 20:59:58.462097  160802 logs.go:123] Gathering logs for containerd ...
	I0728 20:59:58.462110  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 20:59:58.517504  160802 logs.go:123] Gathering logs for container status ...
	I0728 20:59:58.517558  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 20:59:58.559603  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 20:59:58.559641  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 20:59:58.559791  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 20:59:58.559818  160802 out.go:239]   Jul 28 20:59:57 kubernetes-upgrade-20220728205630-9812 kubelet[3525]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 20:59:57 kubernetes-upgrade-20220728205630-9812 kubelet[3525]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 20:59:58.559827  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 20:59:58.559837  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:00:08.560745  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:00:08.920525  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:00:08.920620  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:00:08.953176  160802 cri.go:87] found id: ""
	I0728 21:00:08.953205  160802 logs.go:274] 0 containers: []
	W0728 21:00:08.953215  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:00:08.953222  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:00:08.953285  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:00:08.980208  160802 cri.go:87] found id: ""
	I0728 21:00:08.980237  160802 logs.go:274] 0 containers: []
	W0728 21:00:08.980244  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:00:08.980252  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:00:08.980318  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:00:09.007253  160802 cri.go:87] found id: ""
	I0728 21:00:09.007279  160802 logs.go:274] 0 containers: []
	W0728 21:00:09.007287  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:00:09.007293  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:00:09.007357  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:00:09.034922  160802 cri.go:87] found id: ""
	I0728 21:00:09.034958  160802 logs.go:274] 0 containers: []
	W0728 21:00:09.034965  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:00:09.034971  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:00:09.035023  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:00:09.067542  160802 cri.go:87] found id: ""
	I0728 21:00:09.067570  160802 logs.go:274] 0 containers: []
	W0728 21:00:09.067577  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:00:09.067584  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:00:09.067640  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:00:09.100488  160802 cri.go:87] found id: ""
	I0728 21:00:09.100603  160802 logs.go:274] 0 containers: []
	W0728 21:00:09.100621  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:00:09.100632  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:00:09.100703  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:00:09.134574  160802 cri.go:87] found id: ""
	I0728 21:00:09.134607  160802 logs.go:274] 0 containers: []
	W0728 21:00:09.134621  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:00:09.134630  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:00:09.134692  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:00:09.167350  160802 cri.go:87] found id: ""
	I0728 21:00:09.167383  160802 logs.go:274] 0 containers: []
	W0728 21:00:09.167392  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:00:09.167405  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:00:09.167424  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:00:09.185295  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:00:09.185345  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:00:09.264782  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:00:09.264814  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:00:09.264826  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:00:09.307311  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:00:09.307349  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:00:09.338585  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:00:09.338618  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:00:09.386957  160802 logs.go:138] Found kubelet problem: Jul 28 21:00:09 kubernetes-upgrade-20220728205630-9812 kubelet[3892]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:00:09.433592  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:00:09.433631  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 21:00:09.433760  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 21:00:09.433780  160802 out.go:239]   Jul 28 21:00:09 kubernetes-upgrade-20220728205630-9812 kubelet[3892]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 21:00:09 kubernetes-upgrade-20220728205630-9812 kubelet[3892]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:00:09.433786  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:00:09.433794  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:00:19.435239  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:00:19.920821  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:00:19.920895  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:00:19.948271  160802 cri.go:87] found id: ""
	I0728 21:00:19.948304  160802 logs.go:274] 0 containers: []
	W0728 21:00:19.948313  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:00:19.948319  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:00:19.948373  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:00:19.974693  160802 cri.go:87] found id: ""
	I0728 21:00:19.974721  160802 logs.go:274] 0 containers: []
	W0728 21:00:19.974731  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:00:19.974740  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:00:19.974794  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:00:20.000482  160802 cri.go:87] found id: ""
	I0728 21:00:20.000512  160802 logs.go:274] 0 containers: []
	W0728 21:00:20.000519  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:00:20.000525  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:00:20.000572  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:00:20.027484  160802 cri.go:87] found id: ""
	I0728 21:00:20.027518  160802 logs.go:274] 0 containers: []
	W0728 21:00:20.027527  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:00:20.027535  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:00:20.027592  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:00:20.055223  160802 cri.go:87] found id: ""
	I0728 21:00:20.055266  160802 logs.go:274] 0 containers: []
	W0728 21:00:20.055274  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:00:20.055280  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:00:20.055337  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:00:20.084863  160802 cri.go:87] found id: ""
	I0728 21:00:20.084887  160802 logs.go:274] 0 containers: []
	W0728 21:00:20.084894  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:00:20.084901  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:00:20.084958  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:00:20.110692  160802 cri.go:87] found id: ""
	I0728 21:00:20.110721  160802 logs.go:274] 0 containers: []
	W0728 21:00:20.110727  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:00:20.110734  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:00:20.110780  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:00:20.137519  160802 cri.go:87] found id: ""
	I0728 21:00:20.137544  160802 logs.go:274] 0 containers: []
	W0728 21:00:20.137550  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:00:20.137560  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:00:20.137577  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:00:20.193399  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:00:20.193426  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:00:20.193440  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:00:20.231887  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:00:20.231932  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:00:20.262413  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:00:20.262441  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:00:20.314318  160802 logs.go:138] Found kubelet problem: Jul 28 21:00:19 kubernetes-upgrade-20220728205630-9812 kubelet[4116]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:00:20.363729  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:00:20.363775  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:00:20.380775  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:00:20.380812  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 21:00:20.380979  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 21:00:20.381036  160802 out.go:239]   Jul 28 21:00:19 kubernetes-upgrade-20220728205630-9812 kubelet[4116]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 21:00:19 kubernetes-upgrade-20220728205630-9812 kubelet[4116]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:00:20.381048  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:00:20.381056  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:00:30.382727  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:00:30.420532  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:00:30.420616  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:00:30.446552  160802 cri.go:87] found id: ""
	I0728 21:00:30.446580  160802 logs.go:274] 0 containers: []
	W0728 21:00:30.446586  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:00:30.446595  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:00:30.446676  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:00:30.473419  160802 cri.go:87] found id: ""
	I0728 21:00:30.473448  160802 logs.go:274] 0 containers: []
	W0728 21:00:30.473456  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:00:30.473463  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:00:30.473519  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:00:30.498960  160802 cri.go:87] found id: ""
	I0728 21:00:30.498997  160802 logs.go:274] 0 containers: []
	W0728 21:00:30.499004  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:00:30.499010  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:00:30.499068  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:00:30.524213  160802 cri.go:87] found id: ""
	I0728 21:00:30.524240  160802 logs.go:274] 0 containers: []
	W0728 21:00:30.524247  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:00:30.524253  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:00:30.524313  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:00:30.551794  160802 cri.go:87] found id: ""
	I0728 21:00:30.551823  160802 logs.go:274] 0 containers: []
	W0728 21:00:30.551830  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:00:30.551837  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:00:30.551889  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:00:30.583852  160802 cri.go:87] found id: ""
	I0728 21:00:30.583884  160802 logs.go:274] 0 containers: []
	W0728 21:00:30.583893  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:00:30.583906  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:00:30.583965  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:00:30.608969  160802 cri.go:87] found id: ""
	I0728 21:00:30.608993  160802 logs.go:274] 0 containers: []
	W0728 21:00:30.609002  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:00:30.609014  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:00:30.609067  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:00:30.635438  160802 cri.go:87] found id: ""
	I0728 21:00:30.635468  160802 logs.go:274] 0 containers: []
	W0728 21:00:30.635477  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:00:30.635487  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:00:30.635503  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:00:30.687716  160802 logs.go:138] Found kubelet problem: Jul 28 21:00:30 kubernetes-upgrade-20220728205630-9812 kubelet[4419]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:00:30.746740  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:00:30.746787  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:00:30.763153  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:00:30.763203  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:00:30.815641  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:00:30.815676  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:00:30.815692  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:00:30.852829  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:00:30.852877  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:00:30.883637  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:00:30.883663  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 21:00:30.883773  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 21:00:30.883789  160802 out.go:239]   Jul 28 21:00:30 kubernetes-upgrade-20220728205630-9812 kubelet[4419]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 21:00:30 kubernetes-upgrade-20220728205630-9812 kubelet[4419]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:00:30.883807  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:00:30.883815  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:00:40.885069  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:00:40.920063  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:00:40.920165  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:00:40.946715  160802 cri.go:87] found id: ""
	I0728 21:00:40.946747  160802 logs.go:274] 0 containers: []
	W0728 21:00:40.946755  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:00:40.946762  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:00:40.946815  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:00:40.974609  160802 cri.go:87] found id: ""
	I0728 21:00:40.974635  160802 logs.go:274] 0 containers: []
	W0728 21:00:40.974644  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:00:40.974652  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:00:40.974707  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:00:41.000572  160802 cri.go:87] found id: ""
	I0728 21:00:41.000600  160802 logs.go:274] 0 containers: []
	W0728 21:00:41.000607  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:00:41.000614  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:00:41.000672  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:00:41.026665  160802 cri.go:87] found id: ""
	I0728 21:00:41.026696  160802 logs.go:274] 0 containers: []
	W0728 21:00:41.026705  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:00:41.026712  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:00:41.026769  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:00:41.052800  160802 cri.go:87] found id: ""
	I0728 21:00:41.052832  160802 logs.go:274] 0 containers: []
	W0728 21:00:41.052842  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:00:41.052851  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:00:41.052911  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:00:41.078370  160802 cri.go:87] found id: ""
	I0728 21:00:41.078396  160802 logs.go:274] 0 containers: []
	W0728 21:00:41.078403  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:00:41.078410  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:00:41.078455  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:00:41.105120  160802 cri.go:87] found id: ""
	I0728 21:00:41.105150  160802 logs.go:274] 0 containers: []
	W0728 21:00:41.105159  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:00:41.105167  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:00:41.105223  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:00:41.131835  160802 cri.go:87] found id: ""
	I0728 21:00:41.131869  160802 logs.go:274] 0 containers: []
	W0728 21:00:41.131878  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:00:41.131889  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:00:41.131904  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:00:41.183915  160802 logs.go:138] Found kubelet problem: Jul 28 21:00:40 kubernetes-upgrade-20220728205630-9812 kubelet[4708]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:00:41.230010  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:00:41.230053  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:00:41.246565  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:00:41.246613  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:00:41.301005  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:00:41.301036  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:00:41.301048  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:00:41.339281  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:00:41.339331  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:00:41.369888  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:00:41.369914  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 21:00:41.370013  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 21:00:41.370026  160802 out.go:239]   Jul 28 21:00:40 kubernetes-upgrade-20220728205630-9812 kubelet[4708]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 21:00:40 kubernetes-upgrade-20220728205630-9812 kubelet[4708]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:00:41.370035  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:00:41.370040  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:00:51.372050  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:00:51.420567  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:00:51.420678  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:00:51.447149  160802 cri.go:87] found id: ""
	I0728 21:00:51.447171  160802 logs.go:274] 0 containers: []
	W0728 21:00:51.447178  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:00:51.447185  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:00:51.447241  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:00:51.473508  160802 cri.go:87] found id: ""
	I0728 21:00:51.473539  160802 logs.go:274] 0 containers: []
	W0728 21:00:51.473547  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:00:51.473556  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:00:51.473614  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:00:51.500236  160802 cri.go:87] found id: ""
	I0728 21:00:51.500264  160802 logs.go:274] 0 containers: []
	W0728 21:00:51.500274  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:00:51.500281  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:00:51.500339  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:00:51.526468  160802 cri.go:87] found id: ""
	I0728 21:00:51.526500  160802 logs.go:274] 0 containers: []
	W0728 21:00:51.526511  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:00:51.526519  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:00:51.526568  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:00:51.552901  160802 cri.go:87] found id: ""
	I0728 21:00:51.552930  160802 logs.go:274] 0 containers: []
	W0728 21:00:51.552937  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:00:51.552954  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:00:51.553011  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:00:51.579679  160802 cri.go:87] found id: ""
	I0728 21:00:51.579709  160802 logs.go:274] 0 containers: []
	W0728 21:00:51.579715  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:00:51.579721  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:00:51.579773  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:00:51.604886  160802 cri.go:87] found id: ""
	I0728 21:00:51.604917  160802 logs.go:274] 0 containers: []
	W0728 21:00:51.604925  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:00:51.604934  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:00:51.604986  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:00:51.630094  160802 cri.go:87] found id: ""
	I0728 21:00:51.630120  160802 logs.go:274] 0 containers: []
	W0728 21:00:51.630130  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:00:51.630142  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:00:51.630158  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:00:51.676727  160802 logs.go:138] Found kubelet problem: Jul 28 21:00:51 kubernetes-upgrade-20220728205630-9812 kubelet[5005]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:00:51.722849  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:00:51.722910  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:00:51.738320  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:00:51.738355  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:00:51.792683  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:00:51.792712  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:00:51.792727  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:00:51.829025  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:00:51.829078  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:00:51.857460  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:00:51.857493  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 21:00:51.857601  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 21:00:51.857618  160802 out.go:239]   Jul 28 21:00:51 kubernetes-upgrade-20220728205630-9812 kubelet[5005]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 21:00:51 kubernetes-upgrade-20220728205630-9812 kubelet[5005]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:00:51.857629  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:00:51.857636  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
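(The diagnostic cycle above repeats every ten seconds with an identical signature: no control-plane containers exist because kubelet exits while parsing its flags. Kubernetes v1.24 removed the dockershim-era `--cni-conf-dir` flag, so a kubelet config written for the older v1.16 start aborts the v1.24.3 kubelet immediately. The sketch below is an illustrative approximation of the scan behind the `Found kubelet problem` lines (logs.go:138) — the grep pattern is hypothetical, not minikube's actual pattern list — using a journal line copied verbatim from this run:)

```shell
# Sample journal line copied from the failing run above.
line='Jul 28 21:00:51 kubernetes-upgrade-20220728205630-9812 kubelet[5005]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir'

# Flag the line as a kubelet problem if it matches a known-fatal pattern.
# (Pattern is illustrative; minikube keeps its own list of problem regexes.)
if echo "$line" | grep -qE 'failed to parse kubelet flag|unknown flag'; then
  echo "Found kubelet problem"
fi
```

(Against a live node the input would come from `sudo journalctl -u kubelet -n 400`, exactly as the Run: lines above show.)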
	I0728 21:01:01.859412  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:01:01.920827  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:01:01.920908  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:01:01.946905  160802 cri.go:87] found id: ""
	I0728 21:01:01.946936  160802 logs.go:274] 0 containers: []
	W0728 21:01:01.946946  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:01:01.946955  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:01:01.947014  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:01:01.972353  160802 cri.go:87] found id: ""
	I0728 21:01:01.972377  160802 logs.go:274] 0 containers: []
	W0728 21:01:01.972384  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:01:01.972390  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:01:01.972438  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:01:02.000643  160802 cri.go:87] found id: ""
	I0728 21:01:02.000669  160802 logs.go:274] 0 containers: []
	W0728 21:01:02.000676  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:01:02.000682  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:01:02.000727  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:01:02.029158  160802 cri.go:87] found id: ""
	I0728 21:01:02.029202  160802 logs.go:274] 0 containers: []
	W0728 21:01:02.029210  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:01:02.029217  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:01:02.029264  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:01:02.056503  160802 cri.go:87] found id: ""
	I0728 21:01:02.056541  160802 logs.go:274] 0 containers: []
	W0728 21:01:02.056551  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:01:02.056561  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:01:02.056626  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:01:02.081798  160802 cri.go:87] found id: ""
	I0728 21:01:02.081822  160802 logs.go:274] 0 containers: []
	W0728 21:01:02.081829  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:01:02.081836  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:01:02.081894  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:01:02.108138  160802 cri.go:87] found id: ""
	I0728 21:01:02.108170  160802 logs.go:274] 0 containers: []
	W0728 21:01:02.108179  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:01:02.108186  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:01:02.108235  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:01:02.133705  160802 cri.go:87] found id: ""
	I0728 21:01:02.133736  160802 logs.go:274] 0 containers: []
	W0728 21:01:02.133747  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:01:02.133758  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:01:02.133773  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:01:02.163438  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:01:02.163469  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:01:02.211143  160802 logs.go:138] Found kubelet problem: Jul 28 21:01:01 kubernetes-upgrade-20220728205630-9812 kubelet[5302]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:01:02.262558  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:01:02.262602  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:01:02.280236  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:01:02.280278  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:01:02.342071  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:01:02.342124  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:01:02.342134  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:01:02.382351  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:01:02.382390  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 21:01:02.382497  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 21:01:02.382510  160802 out.go:239]   Jul 28 21:01:01 kubernetes-upgrade-20220728205630-9812 kubelet[5302]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 21:01:01 kubernetes-upgrade-20220728205630-9812 kubelet[5302]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:01:02.382514  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:01:02.382519  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:01:12.383837  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:01:12.419952  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:01:12.420042  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:01:12.447292  160802 cri.go:87] found id: ""
	I0728 21:01:12.447322  160802 logs.go:274] 0 containers: []
	W0728 21:01:12.447332  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:01:12.447340  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:01:12.447396  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:01:12.474514  160802 cri.go:87] found id: ""
	I0728 21:01:12.474548  160802 logs.go:274] 0 containers: []
	W0728 21:01:12.474556  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:01:12.474563  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:01:12.474630  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:01:12.501032  160802 cri.go:87] found id: ""
	I0728 21:01:12.501057  160802 logs.go:274] 0 containers: []
	W0728 21:01:12.501066  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:01:12.501076  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:01:12.501135  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:01:12.528468  160802 cri.go:87] found id: ""
	I0728 21:01:12.528492  160802 logs.go:274] 0 containers: []
	W0728 21:01:12.528499  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:01:12.528506  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:01:12.528555  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:01:12.554585  160802 cri.go:87] found id: ""
	I0728 21:01:12.554619  160802 logs.go:274] 0 containers: []
	W0728 21:01:12.554628  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:01:12.554636  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:01:12.554691  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:01:12.580527  160802 cri.go:87] found id: ""
	I0728 21:01:12.580556  160802 logs.go:274] 0 containers: []
	W0728 21:01:12.580565  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:01:12.580574  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:01:12.580628  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:01:12.606243  160802 cri.go:87] found id: ""
	I0728 21:01:12.606276  160802 logs.go:274] 0 containers: []
	W0728 21:01:12.606285  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:01:12.606293  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:01:12.606340  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:01:12.633081  160802 cri.go:87] found id: ""
	I0728 21:01:12.633113  160802 logs.go:274] 0 containers: []
	W0728 21:01:12.633122  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:01:12.633137  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:01:12.633152  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:01:12.682987  160802 logs.go:138] Found kubelet problem: Jul 28 21:01:12 kubernetes-upgrade-20220728205630-9812 kubelet[5597]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:01:12.729128  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:01:12.729180  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:01:12.745294  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:01:12.745336  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:01:12.800671  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:01:12.800695  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:01:12.800707  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:01:12.839152  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:01:12.839216  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:01:12.867992  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:01:12.868811  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 21:01:12.868938  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 21:01:12.868952  160802 out.go:239]   Jul 28 21:01:12 kubernetes-upgrade-20220728205630-9812 kubelet[5597]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 21:01:12 kubernetes-upgrade-20220728205630-9812 kubelet[5597]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:01:12.868963  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:01:12.868969  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:01:22.870567  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:01:22.920615  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:01:22.920690  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:01:22.947277  160802 cri.go:87] found id: ""
	I0728 21:01:22.947302  160802 logs.go:274] 0 containers: []
	W0728 21:01:22.947308  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:01:22.947315  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:01:22.947365  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:01:22.974015  160802 cri.go:87] found id: ""
	I0728 21:01:22.974047  160802 logs.go:274] 0 containers: []
	W0728 21:01:22.974054  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:01:22.974061  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:01:22.974131  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:01:23.001666  160802 cri.go:87] found id: ""
	I0728 21:01:23.001699  160802 logs.go:274] 0 containers: []
	W0728 21:01:23.001706  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:01:23.001713  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:01:23.001761  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:01:23.027384  160802 cri.go:87] found id: ""
	I0728 21:01:23.027415  160802 logs.go:274] 0 containers: []
	W0728 21:01:23.027422  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:01:23.027428  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:01:23.027493  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:01:23.054676  160802 cri.go:87] found id: ""
	I0728 21:01:23.054705  160802 logs.go:274] 0 containers: []
	W0728 21:01:23.054723  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:01:23.054733  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:01:23.054791  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:01:23.081094  160802 cri.go:87] found id: ""
	I0728 21:01:23.081120  160802 logs.go:274] 0 containers: []
	W0728 21:01:23.081127  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:01:23.081135  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:01:23.081180  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:01:23.106469  160802 cri.go:87] found id: ""
	I0728 21:01:23.106502  160802 logs.go:274] 0 containers: []
	W0728 21:01:23.106512  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:01:23.106521  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:01:23.106583  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:01:23.133292  160802 cri.go:87] found id: ""
	I0728 21:01:23.133319  160802 logs.go:274] 0 containers: []
	W0728 21:01:23.133328  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:01:23.133339  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:01:23.133356  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:01:23.149082  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:01:23.149122  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:01:23.205713  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:01:23.205743  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:01:23.205755  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:01:23.247400  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:01:23.247445  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:01:23.277092  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:01:23.277121  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:01:23.323939  160802 logs.go:138] Found kubelet problem: Jul 28 21:01:22 kubernetes-upgrade-20220728205630-9812 kubelet[5894]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:01:23.369893  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:01:23.369929  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 21:01:23.370065  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 21:01:23.370086  160802 out.go:239]   Jul 28 21:01:22 kubernetes-upgrade-20220728205630-9812 kubelet[5894]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 21:01:22 kubernetes-upgrade-20220728205630-9812 kubelet[5894]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:01:23.370093  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:01:23.370102  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:01:33.370317  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:01:33.420579  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:01:33.420685  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:01:33.450569  160802 cri.go:87] found id: ""
	I0728 21:01:33.450593  160802 logs.go:274] 0 containers: []
	W0728 21:01:33.450599  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:01:33.450605  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:01:33.450652  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:01:33.481100  160802 cri.go:87] found id: ""
	I0728 21:01:33.481127  160802 logs.go:274] 0 containers: []
	W0728 21:01:33.481135  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:01:33.481143  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:01:33.481198  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:01:33.512490  160802 cri.go:87] found id: ""
	I0728 21:01:33.512523  160802 logs.go:274] 0 containers: []
	W0728 21:01:33.512532  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:01:33.512541  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:01:33.512603  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:01:33.543944  160802 cri.go:87] found id: ""
	I0728 21:01:33.543974  160802 logs.go:274] 0 containers: []
	W0728 21:01:33.543983  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:01:33.543991  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:01:33.544055  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:01:33.575014  160802 cri.go:87] found id: ""
	I0728 21:01:33.575045  160802 logs.go:274] 0 containers: []
	W0728 21:01:33.575054  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:01:33.575063  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:01:33.575125  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:01:33.603099  160802 cri.go:87] found id: ""
	I0728 21:01:33.603130  160802 logs.go:274] 0 containers: []
	W0728 21:01:33.603140  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:01:33.603149  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:01:33.603196  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:01:33.630296  160802 cri.go:87] found id: ""
	I0728 21:01:33.630325  160802 logs.go:274] 0 containers: []
	W0728 21:01:33.630332  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:01:33.630339  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:01:33.630387  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:01:33.657324  160802 cri.go:87] found id: ""
	I0728 21:01:33.657356  160802 logs.go:274] 0 containers: []
	W0728 21:01:33.657365  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:01:33.657378  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:01:33.657392  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:01:33.705000  160802 logs.go:138] Found kubelet problem: Jul 28 21:01:33 kubernetes-upgrade-20220728205630-9812 kubelet[6192]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:01:33.754620  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:01:33.754665  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:01:33.770353  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:01:33.770391  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:01:33.825700  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:01:33.825737  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:01:33.825751  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:01:33.863969  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:01:33.864013  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:01:33.893526  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:01:33.893556  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 21:01:33.893668  160802 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0728 21:01:33.893681  160802 out.go:239]   Jul 28 21:01:33 kubernetes-upgrade-20220728205630-9812 kubelet[6192]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Jul 28 21:01:33 kubernetes-upgrade-20220728205630-9812 kubelet[6192]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:01:33.893685  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:01:33.893690  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:01:43.895001  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:01:43.920602  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:01:43.920693  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:01:43.951291  160802 cri.go:87] found id: ""
	I0728 21:01:43.951324  160802 logs.go:274] 0 containers: []
	W0728 21:01:43.951335  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:01:43.951345  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:01:43.951406  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:01:43.982228  160802 cri.go:87] found id: ""
	I0728 21:01:43.982264  160802 logs.go:274] 0 containers: []
	W0728 21:01:43.982274  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:01:43.982284  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:01:43.982350  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:01:44.018496  160802 cri.go:87] found id: ""
	I0728 21:01:44.018528  160802 logs.go:274] 0 containers: []
	W0728 21:01:44.018538  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:01:44.018547  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:01:44.018613  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:01:44.049759  160802 cri.go:87] found id: ""
	I0728 21:01:44.049796  160802 logs.go:274] 0 containers: []
	W0728 21:01:44.049805  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:01:44.049815  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:01:44.049875  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:01:44.081950  160802 cri.go:87] found id: ""
	I0728 21:01:44.081983  160802 logs.go:274] 0 containers: []
	W0728 21:01:44.081992  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:01:44.082000  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:01:44.082063  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:01:44.108825  160802 cri.go:87] found id: ""
	I0728 21:01:44.108858  160802 logs.go:274] 0 containers: []
	W0728 21:01:44.108872  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:01:44.108881  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:01:44.108929  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:01:44.139774  160802 cri.go:87] found id: ""
	I0728 21:01:44.139798  160802 logs.go:274] 0 containers: []
	W0728 21:01:44.139804  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:01:44.139816  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:01:44.139879  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:01:44.166610  160802 cri.go:87] found id: ""
	I0728 21:01:44.166635  160802 logs.go:274] 0 containers: []
	W0728 21:01:44.166642  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:01:44.166651  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:01:44.166664  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:01:44.215147  160802 logs.go:138] Found kubelet problem: Jul 28 21:01:43 kubernetes-upgrade-20220728205630-9812 kubelet[6488]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:01:44.262161  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:01:44.262206  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:01:44.279177  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:01:44.279226  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:01:44.336817  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:01:44.336847  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:01:44.336859  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:01:44.374149  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:01:44.374193  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:01:44.403890  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:01:44.403916  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 21:01:44.404023  160802 out.go:239] X Problems detected in kubelet:
	W0728 21:01:44.404037  160802 out.go:239]   Jul 28 21:01:43 kubernetes-upgrade-20220728205630-9812 kubelet[6488]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:01:44.404041  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:01:44.404046  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:01:54.405409  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:01:54.419902  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:01:54.419988  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:01:54.448684  160802 cri.go:87] found id: ""
	I0728 21:01:54.448710  160802 logs.go:274] 0 containers: []
	W0728 21:01:54.448719  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:01:54.448728  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:01:54.448794  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:01:54.477290  160802 cri.go:87] found id: ""
	I0728 21:01:54.477326  160802 logs.go:274] 0 containers: []
	W0728 21:01:54.477335  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:01:54.477343  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:01:54.477400  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:01:54.503660  160802 cri.go:87] found id: ""
	I0728 21:01:54.503689  160802 logs.go:274] 0 containers: []
	W0728 21:01:54.503698  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:01:54.503707  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:01:54.503755  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:01:54.530117  160802 cri.go:87] found id: ""
	I0728 21:01:54.530143  160802 logs.go:274] 0 containers: []
	W0728 21:01:54.530152  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:01:54.530162  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:01:54.530216  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:01:54.557645  160802 cri.go:87] found id: ""
	I0728 21:01:54.557683  160802 logs.go:274] 0 containers: []
	W0728 21:01:54.557694  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:01:54.557703  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:01:54.557766  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:01:54.584750  160802 cri.go:87] found id: ""
	I0728 21:01:54.584777  160802 logs.go:274] 0 containers: []
	W0728 21:01:54.584784  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:01:54.584790  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:01:54.584837  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:01:54.611538  160802 cri.go:87] found id: ""
	I0728 21:01:54.611567  160802 logs.go:274] 0 containers: []
	W0728 21:01:54.611574  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:01:54.611582  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:01:54.611642  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:01:54.639297  160802 cri.go:87] found id: ""
	I0728 21:01:54.639331  160802 logs.go:274] 0 containers: []
	W0728 21:01:54.639337  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:01:54.639347  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:01:54.639358  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:01:54.691418  160802 logs.go:138] Found kubelet problem: Jul 28 21:01:54 kubernetes-upgrade-20220728205630-9812 kubelet[6790]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:01:54.737146  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:01:54.737189  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:01:54.755335  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:01:54.755382  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:01:54.811257  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:01:54.811310  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:01:54.811327  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:01:54.848751  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:01:54.848804  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:01:54.880228  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:01:54.880254  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 21:01:54.880373  160802 out.go:239] X Problems detected in kubelet:
	W0728 21:01:54.880385  160802 out.go:239]   Jul 28 21:01:54 kubernetes-upgrade-20220728205630-9812 kubelet[6790]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:01:54.880391  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:01:54.880413  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:02:04.882058  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:02:04.920153  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:02:04.920257  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:02:04.949005  160802 cri.go:87] found id: ""
	I0728 21:02:04.949044  160802 logs.go:274] 0 containers: []
	W0728 21:02:04.949052  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:02:04.949063  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:02:04.949126  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:02:04.977644  160802 cri.go:87] found id: ""
	I0728 21:02:04.977674  160802 logs.go:274] 0 containers: []
	W0728 21:02:04.977683  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:02:04.977690  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:02:04.977755  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:02:05.004869  160802 cri.go:87] found id: ""
	I0728 21:02:05.004900  160802 logs.go:274] 0 containers: []
	W0728 21:02:05.004910  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:02:05.004919  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:02:05.004978  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:02:05.031209  160802 cri.go:87] found id: ""
	I0728 21:02:05.031236  160802 logs.go:274] 0 containers: []
	W0728 21:02:05.031243  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:02:05.031250  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:02:05.031297  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:02:05.058559  160802 cri.go:87] found id: ""
	I0728 21:02:05.058587  160802 logs.go:274] 0 containers: []
	W0728 21:02:05.058593  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:02:05.058600  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:02:05.058665  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:02:05.087349  160802 cri.go:87] found id: ""
	I0728 21:02:05.087374  160802 logs.go:274] 0 containers: []
	W0728 21:02:05.087381  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:02:05.087389  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:02:05.087446  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:02:05.115776  160802 cri.go:87] found id: ""
	I0728 21:02:05.115801  160802 logs.go:274] 0 containers: []
	W0728 21:02:05.115807  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:02:05.115813  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:02:05.115870  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:02:05.144255  160802 cri.go:87] found id: ""
	I0728 21:02:05.144282  160802 logs.go:274] 0 containers: []
	W0728 21:02:05.144290  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:02:05.144301  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:02:05.144328  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:02:05.196560  160802 logs.go:138] Found kubelet problem: Jul 28 21:02:04 kubernetes-upgrade-20220728205630-9812 kubelet[7087]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:02:05.242778  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:02:05.242819  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:02:05.259370  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:02:05.259415  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:02:05.318270  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:02:05.318303  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:02:05.318318  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:02:05.355993  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:02:05.356036  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:02:05.385546  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:02:05.385571  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0728 21:02:05.385667  160802 out.go:239] X Problems detected in kubelet:
	W0728 21:02:05.385679  160802 out.go:239]   Jul 28 21:02:04 kubernetes-upgrade-20220728205630-9812 kubelet[7087]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:02:05.385684  160802 out.go:309] Setting ErrFile to fd 2...
	I0728 21:02:05.385689  160802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:02:15.386914  160802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:02:15.398574  160802 kubeadm.go:630] restartCluster took 4m1.997922421s
	W0728 21:02:15.398737  160802 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0728 21:02:15.398807  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0728 21:02:16.170735  160802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 21:02:16.183498  160802 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 21:02:16.193101  160802 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 21:02:16.193186  160802 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 21:02:16.201876  160802 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 21:02:16.201929  160802 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 21:04:12.721082  160802 out.go:204]   - Generating certificates and keys ...
	I0728 21:04:12.724181  160802 out.go:204]   - Booting up control plane ...
	W0728 21:04:12.726717  160802 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0728 21:02:16.240575    7625 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0728 21:04:12.726772  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0728 21:04:13.465967  160802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 21:04:13.478145  160802 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 21:04:13.478204  160802 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 21:04:13.486471  160802 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 21:04:13.486522  160802 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 21:06:09.456081  160802 out.go:204]   - Generating certificates and keys ...
	I0728 21:06:09.460704  160802 out.go:204]   - Booting up control plane ...
	I0728 21:06:09.463187  160802 kubeadm.go:397] StartCluster complete in 7m56.100147537s
	I0728 21:06:09.463242  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:06:09.463303  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:06:09.492218  160802 cri.go:87] found id: ""
	I0728 21:06:09.492244  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.492250  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:06:09.492257  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:06:09.492327  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:06:09.519747  160802 cri.go:87] found id: ""
	I0728 21:06:09.519773  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.519779  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:06:09.519786  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:06:09.519843  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:06:09.546296  160802 cri.go:87] found id: ""
	I0728 21:06:09.546331  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.546340  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:06:09.546348  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:06:09.546505  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:06:09.574600  160802 cri.go:87] found id: ""
	I0728 21:06:09.574627  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.574634  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:06:09.574640  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:06:09.574701  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:06:09.604664  160802 cri.go:87] found id: ""
	I0728 21:06:09.604694  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.604700  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:06:09.604708  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:06:09.604798  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:06:09.634288  160802 cri.go:87] found id: ""
	I0728 21:06:09.634320  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.634329  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:06:09.634339  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:06:09.634400  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:06:09.666085  160802 cri.go:87] found id: ""
	I0728 21:06:09.666116  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.666123  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:06:09.666130  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:06:09.666186  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:06:09.697616  160802 cri.go:87] found id: ""
	I0728 21:06:09.697646  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.697656  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:06:09.697671  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:06:09.697688  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:06:09.715231  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:06:09.715278  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:06:09.774303  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:06:09.774333  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:06:09.774345  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:06:09.822586  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:06:09.822641  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:06:09.856873  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:06:09.856900  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:06:09.905506  160802 logs.go:138] Found kubelet problem: Jul 28 21:06:09 kubernetes-upgrade-20220728205630-9812 kubelet[11608]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	W0728 21:06:09.966501  160802 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0728 21:04:13.523562    9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0728 21:06:09.966572  160802 out.go:239] * 
	W0728 21:06:09.966810  160802 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0728 21:04:13.523562    9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 21:06:09.966848  160802 out.go:239] * 
	W0728 21:06:09.967728  160802 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 21:06:09.971858  160802 out.go:177] X Problems detected in kubelet:
	I0728 21:06:09.973883  160802 out.go:177]   Jul 28 21:06:09 kubernetes-upgrade-20220728205630-9812 kubelet[11608]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:06:09.978588  160802 out.go:177] 
	W0728 21:06:09.981692  160802 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0728 21:04:13.523562    9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0728 21:04:13.523562    9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 21:06:09.981884  160802 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0728 21:06:09.981956  160802 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0728 21:06:09.986376  160802 out.go:177] 

** /stderr **
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220728205630-9812 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220728205630-9812 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220728205630-9812 version --output=json: exit status 1 (58.834661ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "24",
	    "gitVersion": "v1.24.3",
	    "gitCommit": "aef86a93758dc3cb2c658dd9657ab4ad4afc21cb",
	    "gitTreeState": "clean",
	    "buildDate": "2022-07-13T14:30:46Z",
	    "goVersion": "go1.18.3",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  },
	  "kustomizeVersion": "v4.5.4"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.67.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-07-28 21:06:10.227428285 +0000 UTC m=+2357.292310965
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220728205630-9812
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220728205630-9812:

-- stdout --
	[
	    {
	        "Id": "157d91e6166012921e611704b4a2055b61d55a4321adf28c987469c33622c1b6",
	        "Created": "2022-07-28T20:56:44.053513528Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 161192,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T20:57:23.967962968Z",
	            "FinishedAt": "2022-07-28T20:57:22.081850941Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/157d91e6166012921e611704b4a2055b61d55a4321adf28c987469c33622c1b6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/157d91e6166012921e611704b4a2055b61d55a4321adf28c987469c33622c1b6/hostname",
	        "HostsPath": "/var/lib/docker/containers/157d91e6166012921e611704b4a2055b61d55a4321adf28c987469c33622c1b6/hosts",
	        "LogPath": "/var/lib/docker/containers/157d91e6166012921e611704b4a2055b61d55a4321adf28c987469c33622c1b6/157d91e6166012921e611704b4a2055b61d55a4321adf28c987469c33622c1b6-json.log",
	        "Name": "/kubernetes-upgrade-20220728205630-9812",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-20220728205630-9812:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220728205630-9812",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/98cd9b4af8297a0f79b7836844d484361b0e513a818a7af9acc180c4cff6a59f-init/diff:/var/lib/docker/overlay2/159b55a9ed0c6f628a057cdb04dda02bba30a3b641518455957c4aad71210e5b/diff:/var/lib/docker/overlay2/10f339ea6c6d2bc20e1c3984c10e314bdfacbf16c4f1fc81508a8af53618e0c2/diff:/var/lib/docker/overlay2/f70c8501f8e2ab7eb8cf3713b8965df8ff0eabb54c03470a2ca63b07e9f8aa54/diff:/var/lib/docker/overlay2/c612b2534e2fc8952f8d55d6769698d0ad07b4f70868569fe73c07e709eb41c4/diff:/var/lib/docker/overlay2/b6082008a01e842766036ffbc69caf78f0bf4a848cca7f47ab699d89da8d1da0/diff:/var/lib/docker/overlay2/d75e86c9871d888af33c32588e829032e7d8df43d915295856b1bd632a8aec40/diff:/var/lib/docker/overlay2/c8146bc91d30e444bdb037c8984a79eba689a0e9f4c6ee8a1a2f087ead11cdee/diff:/var/lib/docker/overlay2/43aff643ab52dd0f1901cc30e46fade6289c38619ec98e77b3c0b9ae3b5ceff6/diff:/var/lib/docker/overlay2/4e0dd980aab46effffa7b2d0a19ff6a1d9de94c97cca5150e7245a47dd82d395/diff:/var/lib/docker/overlay2/a54660
4a6462e894bf570401a57ff4d82983194620ba59e926b7aae262e19e3b/diff:/var/lib/docker/overlay2/94097ff7af073076335b40eb2a01a53ac0fc19c248baf8daa67146b23da2fd7d/diff:/var/lib/docker/overlay2/2e81cf4170d8d47655e7f008e7f79c639ca56a4fa6b48b83eada8753144434f0/diff:/var/lib/docker/overlay2/ad12959134154f9796289bec856cc02b54fc2c9b3de5bfa4626bde685b62714a/diff:/var/lib/docker/overlay2/4d208f59b6ce5776ae3295e11d01acfa1908bbdff6cb9fda882ea8995aee2cb0/diff:/var/lib/docker/overlay2/f9e0539a853b02e93c2741a252361d5b6cc4ecc7e2098a1f9f6f8f06ef8af675/diff:/var/lib/docker/overlay2/9aafc677aea247a7aa7f21124a1d04e79e334bba6950604f0d9c56330f782239/diff:/var/lib/docker/overlay2/0d5358399b9dcb842d3e9f481695ffca49e2ead49bbd6f11c30e71a845833876/diff:/var/lib/docker/overlay2/6443a29a568b98bbf23e8cbe82a92ae50c3e69955de693c5ec049c84c83c2578/diff:/var/lib/docker/overlay2/930c7197f21b625ccb4c9330154d41beedf2f4dbce826a605e0b175dc9db6fb4/diff:/var/lib/docker/overlay2/e467ccfb37dceaab87301a1bc1fc8424d242e4fd901cde56b24c07668d1d47d1/diff:/var/lib/d
ocker/overlay2/9d64a9063c318a598d6f650409543bb20699d834b19aca837a0fd2e4785de7a7/diff:/var/lib/docker/overlay2/730748e1888d7eda2cf74610d227e0d7a5e969a95d87795513ae2b65d4bf0d37/diff:/var/lib/docker/overlay2/d15656d583a4f3a64cd65d8b266888d55da8b978e95cd0dedb81984b17547a8e/diff:/var/lib/docker/overlay2/1075d687c8048b9e07d64f9229e6c6fe189eb1d89e59fbc320f6be7f29f3dcf3/diff:/var/lib/docker/overlay2/70d6a8817e1e919d589fd69f67161bb4dac16836849b3b35b26cf48214f62cf6/diff:/var/lib/docker/overlay2/8e8e13f68b04eaae4a9c67194a27c687ed31816a5aa7bbe43aefb7885ab49cfd/diff:/var/lib/docker/overlay2/a5d29889159bc71a7e53f3275846c3a205f4afbf8707facf2cf88163af181ea6/diff:/var/lib/docker/overlay2/18cb8da85b40492f06576cb149164681c9b88cc6d83a7b73074f93afd2d326d0/diff:/var/lib/docker/overlay2/cd1ceb3894d2dc694ca5f4d57fb937f12a471c4861e72eb758c8d99ae15ec8e7/diff:/var/lib/docker/overlay2/e192c90b77e5017fb4d32a36c6118403b5ce78981718b9ae597795a57dd8967e/diff:/var/lib/docker/overlay2/cb277b4bd13e414771896c6e520c18a1de8c252ad1c2dc11f05a8b57018
bdf08/diff:/var/lib/docker/overlay2/e8f50e7d98e92ecd9a2465d95ce41953a7acd8958f4a599e837bb9bbbfaa72dc/diff:/var/lib/docker/overlay2/7f8089a9db64a7a0b1637dd394b4f2e4b9886ab7478b5972c3d3b8addec08c69/diff:/var/lib/docker/overlay2/6e552fc578751df4db559a20753abdb8d0bb057b992f1c6034a84a0a63e169ae/diff:/var/lib/docker/overlay2/0634299a7052fc637709266585f1982b3bf26fcef8a0fbb11fa9b1d17b578e35/diff:/var/lib/docker/overlay2/07b5dc86d77874519d1e86517fc1c8cdc6809da1a5ceaa0283ed6bc573ecc0ba/diff:/var/lib/docker/overlay2/06f82e7047fa2ecb3d75b421c09c07633f4324121e1f8e4158cf97e9172f97a9/diff:/var/lib/docker/overlay2/33882ff0de530162078f03ec586455ae28e3d9e957265ccf6de389ab70269be4/diff:/var/lib/docker/overlay2/13a232f6a032f7a2122ecb4c4954c1d7427d99358c129109496d92edac19aa4d/diff:/var/lib/docker/overlay2/3f28515f67d2fb23a544c48310684f66ccd4a2d4894b75858e9750adb53d7d1f/diff:/var/lib/docker/overlay2/a58c936c6b19de4261612f61347279308363f3918c1b7585e4c8425e69c6e89f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98cd9b4af8297a0f79b7836844d484361b0e513a818a7af9acc180c4cff6a59f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98cd9b4af8297a0f79b7836844d484361b0e513a818a7af9acc180c4cff6a59f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98cd9b4af8297a0f79b7836844d484361b0e513a818a7af9acc180c4cff6a59f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220728205630-9812",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220728205630-9812/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220728205630-9812",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220728205630-9812",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220728205630-9812",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "638fa93f7d0ca5187a1fc034140628cf07afb9be6a1c298481d12962389ccb3f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49337"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49336"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49333"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49335"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49334"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/638fa93f7d0c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220728205630-9812": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "157d91e61660",
	                        "kubernetes-upgrade-20220728205630-9812"
	                    ],
	                    "NetworkID": "c898e4ca6805eab63bab8736fbb6bce03c0f9e3a222a941d6daa6694d9e2e9ad",
	                    "EndpointID": "30990745a4063b9672230f312ad87259d4a2e5fb4469d0935ae4ece8090a27a5",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220728205630-9812 -n kubernetes-upgrade-20220728205630-9812
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220728205630-9812 -n kubernetes-upgrade-20220728205630-9812: exit status 2 (499.711235ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220728205630-9812 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-20220728205630-9812 logs -n 25: (1.087179183s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|---------------------|---------------------|
	| profile | list --output json                                | minikube                                       | jenkins | v1.26.0 | 28 Jul 22 20:58 UTC | 28 Jul 22 20:58 UTC |
	| delete  | -p pause-20220728205731-9812                      | pause-20220728205731-9812                      | jenkins | v1.26.0 | 28 Jul 22 20:58 UTC | 28 Jul 22 20:59 UTC |
	| start   | -p                                                | force-systemd-flag-20220728205900-9812         | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 20:59 UTC |
	|         | force-systemd-flag-20220728205900-9812            |                                                |         |         |                     |                     |
	|         | --memory=2048 --force-systemd                     |                                                |         |         |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker            |                                                |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                                |         |         |                     |                     |
	| ssh     | cert-options-20220728205835-9812                  | cert-options-20220728205835-9812               | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 20:59 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                                |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                                |         |         |                     |                     |
	| ssh     | -p                                                | cert-options-20220728205835-9812               | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 20:59 UTC |
	|         | cert-options-20220728205835-9812                  |                                                |         |         |                     |                     |
	|         | -- sudo cat                                       |                                                |         |         |                     |                     |
	|         | /etc/kubernetes/admin.conf                        |                                                |         |         |                     |                     |
	| delete  | -p                                                | cert-options-20220728205835-9812               | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 20:59 UTC |
	|         | cert-options-20220728205835-9812                  |                                                |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220728205919-9812            | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 21:01 UTC |
	|         | old-k8s-version-20220728205919-9812               |                                                |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                |         |         |                     |                     |
	|         | --keep-context=false                              |                                                |         |         |                     |                     |
	|         | --driver=docker                                   |                                                |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                                |         |         |                     |                     |
	| ssh     | force-systemd-flag-20220728205900-9812            | force-systemd-flag-20220728205900-9812         | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 20:59 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                                |         |         |                     |                     |
	| delete  | -p                                                | force-systemd-flag-20220728205900-9812         | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 20:59 UTC |
	|         | force-systemd-flag-20220728205900-9812            |                                                |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220728205940-9812                 | jenkins | v1.26.0 | 28 Jul 22 20:59 UTC | 28 Jul 22 21:00 UTC |
	|         | no-preload-20220728205940-9812                    |                                                |         |         |                     |                     |
	|         | --memory=2200                                     |                                                |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |         |                     |                     |
	|         | --driver=docker                                   |                                                |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220728205940-9812                 | jenkins | v1.26.0 | 28 Jul 22 21:00 UTC | 28 Jul 22 21:00 UTC |
	|         | no-preload-20220728205940-9812                    |                                                |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220728205940-9812                 | jenkins | v1.26.0 | 28 Jul 22 21:00 UTC | 28 Jul 22 21:01 UTC |
	|         | no-preload-20220728205940-9812                    |                                                |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220728205940-9812                 | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC | 28 Jul 22 21:01 UTC |
	|         | no-preload-20220728205940-9812                    |                                                |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220728205940-9812                 | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC |                     |
	|         | no-preload-20220728205940-9812                    |                                                |         |         |                     |                     |
	|         | --memory=2200                                     |                                                |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                                |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                |         |         |                     |                     |
	|         | --driver=docker                                   |                                                |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220728205919-9812            | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC | 28 Jul 22 21:01 UTC |
	|         | old-k8s-version-20220728205919-9812               |                                                |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220728205919-9812            | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC | 28 Jul 22 21:01 UTC |
	|         | old-k8s-version-20220728205919-9812               |                                                |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                     |                     |
	| start   | -p                                                | cert-expiration-20220728205827-9812            | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC | 28 Jul 22 21:02 UTC |
	|         | cert-expiration-20220728205827-9812               |                                                |         |         |                     |                     |
	|         | --memory=2048                                     |                                                |         |         |                     |                     |
	|         | --cert-expiration=8760h                           |                                                |         |         |                     |                     |
	|         | --driver=docker                                   |                                                |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220728205919-9812            | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC | 28 Jul 22 21:01 UTC |
	|         | old-k8s-version-20220728205919-9812               |                                                |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220728205919-9812            | jenkins | v1.26.0 | 28 Jul 22 21:01 UTC |                     |
	|         | old-k8s-version-20220728205919-9812               |                                                |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                |         |         |                     |                     |
	|         | --keep-context=false                              |                                                |         |         |                     |                     |
	|         | --driver=docker                                   |                                                |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                                |         |         |                     |                     |
	| delete  | -p                                                | cert-expiration-20220728205827-9812            | jenkins | v1.26.0 | 28 Jul 22 21:02 UTC | 28 Jul 22 21:02 UTC |
	|         | cert-expiration-20220728205827-9812               |                                                |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220728210213-9812 | jenkins | v1.26.0 | 28 Jul 22 21:02 UTC | 28 Jul 22 21:03 UTC |
	|         | default-k8s-different-port-20220728210213-9812    |                                                |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                                |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220728210213-9812 | jenkins | v1.26.0 | 28 Jul 22 21:03 UTC | 28 Jul 22 21:03 UTC |
	|         | default-k8s-different-port-20220728210213-9812    |                                                |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220728210213-9812 | jenkins | v1.26.0 | 28 Jul 22 21:03 UTC | 28 Jul 22 21:03 UTC |
	|         | default-k8s-different-port-20220728210213-9812    |                                                |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220728210213-9812 | jenkins | v1.26.0 | 28 Jul 22 21:03 UTC | 28 Jul 22 21:03 UTC |
	|         | default-k8s-different-port-20220728210213-9812    |                                                |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220728210213-9812 | jenkins | v1.26.0 | 28 Jul 22 21:03 UTC |                     |
	|         | default-k8s-different-port-20220728210213-9812    |                                                |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                                |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 21:03:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 21:03:42.611768  212382 out.go:296] Setting OutFile to fd 1 ...
	I0728 21:03:42.611935  212382 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:03:42.611945  212382 out.go:309] Setting ErrFile to fd 2...
	I0728 21:03:42.611957  212382 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:03:42.612121  212382 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 21:03:42.612829  212382 out.go:303] Setting JSON to false
	I0728 21:03:42.614911  212382 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2773,"bootTime":1659039450,"procs":949,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0728 21:03:42.615000  212382 start.go:125] virtualization: kvm guest
	I0728 21:03:42.617804  212382 out.go:177] * [default-k8s-different-port-20220728210213-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0728 21:03:42.619408  212382 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 21:03:42.619334  212382 notify.go:193] Checking for updates...
	I0728 21:03:42.622212  212382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 21:03:42.624137  212382 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 21:03:42.625777  212382 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 21:03:42.627238  212382 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0728 21:03:42.629353  212382 config.go:178] Loaded profile config "default-k8s-different-port-20220728210213-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 21:03:42.629909  212382 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 21:03:42.681814  212382 docker.go:137] docker version: linux-20.10.17
	I0728 21:03:42.681924  212382 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 21:03:42.801064  212382 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-07-28 21:03:42.714639784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 21:03:42.801197  212382 docker.go:254] overlay module found
	I0728 21:03:42.803527  212382 out.go:177] * Using the docker driver based on existing profile
	I0728 21:03:39.133295  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:41.133967  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:42.804924  212382 start.go:284] selected driver: docker
	I0728 21:03:42.804944  212382 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220728210213-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728210213-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 21:03:42.805125  212382 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 21:03:42.806371  212382 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 21:03:42.927884  212382 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-07-28 21:03:42.841781106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 21:03:42.928205  212382 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 21:03:42.928263  212382 cni.go:95] Creating CNI manager for ""
	I0728 21:03:42.928280  212382 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0728 21:03:42.928309  212382 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220728210213-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728210213-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 21:03:42.930940  212382 out.go:177] * Starting control plane node default-k8s-different-port-20220728210213-9812 in cluster default-k8s-different-port-20220728210213-9812
	I0728 21:03:42.932288  212382 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0728 21:03:42.933583  212382 out.go:177] * Pulling base image ...
	I0728 21:03:42.934816  212382 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0728 21:03:42.934907  212382 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4
	I0728 21:03:42.934927  212382 cache.go:57] Caching tarball of preloaded images
	I0728 21:03:42.934935  212382 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 21:03:42.935187  212382 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 21:03:42.935203  212382 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on containerd
	I0728 21:03:42.935390  212382 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/config.json ...
	I0728 21:03:42.974508  212382 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 21:03:42.974540  212382 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 21:03:42.974555  212382 cache.go:208] Successfully downloaded all kic artifacts
	I0728 21:03:42.974615  212382 start.go:370] acquiring machines lock for default-k8s-different-port-20220728210213-9812: {Name:mkab6f862bec008fcda0a5dd067bb9f92e1c3d5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 21:03:42.974723  212382 start.go:374] acquired machines lock for "default-k8s-different-port-20220728210213-9812" in 83.295µs
	I0728 21:03:42.974746  212382 start.go:95] Skipping create...Using existing machine configuration
	I0728 21:03:42.974756  212382 fix.go:55] fixHost starting: 
	I0728 21:03:42.975078  212382 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728210213-9812 --format={{.State.Status}}
	I0728 21:03:43.011861  212382 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220728210213-9812: state=Stopped err=<nil>
	W0728 21:03:43.011896  212382 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 21:03:43.014312  212382 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220728210213-9812" ...
	I0728 21:03:42.335465  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:44.834531  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:43.015733  212382 cli_runner.go:164] Run: docker start default-k8s-different-port-20220728210213-9812
	I0728 21:03:43.449285  212382 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728210213-9812 --format={{.State.Status}}
	I0728 21:03:43.490379  212382 kic.go:415] container "default-k8s-different-port-20220728210213-9812" state is running.
	I0728 21:03:43.490927  212382 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728210213-9812
	I0728 21:03:43.529100  212382 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/config.json ...
	I0728 21:03:43.529436  212382 machine.go:88] provisioning docker machine ...
	I0728 21:03:43.529479  212382 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220728210213-9812"
	I0728 21:03:43.529539  212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
	I0728 21:03:43.570669  212382 main.go:134] libmachine: Using SSH client type: native
	I0728 21:03:43.570942  212382 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil>  [] 0s} 127.0.0.1 49392 <nil> <nil>}
	I0728 21:03:43.570973  212382 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220728210213-9812 && echo "default-k8s-different-port-20220728210213-9812" | sudo tee /etc/hostname
	I0728 21:03:43.571703  212382 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51926->127.0.0.1:49392: read: connection reset by peer
	I0728 21:03:46.705061  212382 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220728210213-9812
	
	I0728 21:03:46.705153  212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
	I0728 21:03:46.743391  212382 main.go:134] libmachine: Using SSH client type: native
	I0728 21:03:46.743545  212382 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil>  [] 0s} 127.0.0.1 49392 <nil> <nil>}
	I0728 21:03:46.743567  212382 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220728210213-9812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220728210213-9812/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220728210213-9812' | sudo tee -a /etc/hosts; 
				fi
			fi
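The remote script above is an idempotent `/etc/hosts` update: it only rewrites the `127.0.1.1` entry when the hostname is not already present. A standalone sketch of the same pattern, using a hypothetical hostname and a scratch copy of the hosts file rather than the real `/etc/hosts`, is:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical hostname and a scratch hosts file (assumptions for this sketch).
NAME="demo-host"
HOSTS="$(mktemp)"
printf '127.0.0.1 localhost\n' > "$HOSTS"

# Idempotent update: only touch the file if the name is not already present.
if ! grep -q "\s$NAME" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
    # Replace the existing 127.0.1.1 entry in place.
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # Append a fresh entry.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi

cat "$HOSTS"
```

Running it twice leaves the file unchanged the second time, which is why minikube can safely re-run provisioning on an existing machine.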
	I0728 21:03:46.867081  212382 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 21:03:46.867123  212382 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 21:03:46.867147  212382 ubuntu.go:177] setting up certificates
	I0728 21:03:46.867157  212382 provision.go:83] configureAuth start
	I0728 21:03:46.867212  212382 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728210213-9812
	I0728 21:03:46.903997  212382 provision.go:138] copyHostCerts
	I0728 21:03:46.904072  212382 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 21:03:46.904085  212382 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 21:03:46.904170  212382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1078 bytes)
	I0728 21:03:46.904301  212382 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 21:03:46.904324  212382 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 21:03:46.904359  212382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 21:03:46.904452  212382 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 21:03:46.904466  212382 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 21:03:46.904504  212382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 21:03:46.904631  212382 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220728210213-9812 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220728210213-9812]
	I0728 21:03:47.010831  212382 provision.go:172] copyRemoteCerts
	I0728 21:03:47.010939  212382 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 21:03:47.010989  212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
	I0728 21:03:47.049207  212382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728210213-9812/id_rsa Username:docker}
	I0728 21:03:47.143798  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 21:03:47.163910  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0728 21:03:47.184336  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 21:03:47.206560  212382 provision.go:86] duration metric: configureAuth took 339.388807ms
	I0728 21:03:47.206588  212382 ubuntu.go:193] setting minikube options for container-runtime
	I0728 21:03:47.206755  212382 config.go:178] Loaded profile config "default-k8s-different-port-20220728210213-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 21:03:47.206766  212382 machine.go:91] provisioned docker machine in 3.67730656s
	I0728 21:03:47.206773  212382 start.go:307] post-start starting for "default-k8s-different-port-20220728210213-9812" (driver="docker")
	I0728 21:03:47.206780  212382 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 21:03:47.206816  212382 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 21:03:47.206855  212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
	I0728 21:03:47.245535  212382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728210213-9812/id_rsa Username:docker}
	I0728 21:03:47.336155  212382 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 21:03:47.339133  212382 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 21:03:47.339159  212382 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 21:03:47.339168  212382 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 21:03:47.339173  212382 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 21:03:47.339182  212382 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 21:03:47.339233  212382 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 21:03:47.339313  212382 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem -> 98122.pem in /etc/ssl/certs
	I0728 21:03:47.339405  212382 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 21:03:47.347120  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem --> /etc/ssl/certs/98122.pem (1708 bytes)
	I0728 21:03:47.367679  212382 start.go:310] post-start completed in 160.892278ms
	I0728 21:03:47.367781  212382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 21:03:47.367819  212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
	I0728 21:03:47.406159  212382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728210213-9812/id_rsa Username:docker}
	I0728 21:03:47.491717  212382 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 21:03:47.496208  212382 fix.go:57] fixHost completed within 4.521444392s
	I0728 21:03:47.496245  212382 start.go:82] releasing machines lock for "default-k8s-different-port-20220728210213-9812", held for 4.521508456s
	I0728 21:03:47.496338  212382 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728210213-9812
	I0728 21:03:47.533202  212382 ssh_runner.go:195] Run: systemctl --version
	I0728 21:03:47.533258  212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
	I0728 21:03:47.533261  212382 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 21:03:47.533318  212382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728210213-9812
	I0728 21:03:47.573312  212382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728210213-9812/id_rsa Username:docker}
	I0728 21:03:47.573528  212382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728210213-9812/id_rsa Username:docker}
	I0728 21:03:43.633079  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:46.133073  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:46.835006  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:48.835071  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:47.659568  212382 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 21:03:47.685550  212382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 21:03:47.696446  212382 docker.go:188] disabling docker service ...
	I0728 21:03:47.696504  212382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0728 21:03:47.707699  212382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0728 21:03:47.718214  212382 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0728 21:03:47.800400  212382 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0728 21:03:47.892734  212382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0728 21:03:47.903649  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 21:03:47.918235  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0728 21:03:47.927669  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0728 21:03:47.936838  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0728 21:03:47.946170  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0728 21:03:47.955346  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0728 21:03:47.965255  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
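The `printf`-and-`base64 -d` pipeline above writes a tiny drop-in file that containerd picks up through the `imports = [...]` line patched into `config.toml` two commands earlier. Decoding the payload locally shows the whole drop-in is a single line:

```shell
# Decode the base64 payload from the log line above; command substitution
# strips the trailing newline that the payload carries.
decoded="$(echo 'dmVyc2lvbiA9IDIK' | base64 -d)"
echo "$decoded"   # prints: version = 2
```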
	I0728 21:03:47.980845  212382 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 21:03:47.987996  212382 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 21:03:47.995582  212382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 21:03:48.074160  212382 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 21:03:48.153721  212382 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0728 21:03:48.153795  212382 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0728 21:03:48.158151  212382 start.go:471] Will wait 60s for crictl version
	I0728 21:03:48.158219  212382 ssh_runner.go:195] Run: sudo crictl version
	I0728 21:03:48.189281  212382 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-28T21:03:48Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0728 21:03:48.133790  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:50.632719  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:52.633021  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:51.333960  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:53.335057  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:54.633129  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:57.133148  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:59.236557  212382 ssh_runner.go:195] Run: sudo crictl version
	I0728 21:03:59.262470  212382 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0728 21:03:59.262532  212382 ssh_runner.go:195] Run: containerd --version
	I0728 21:03:59.295607  212382 ssh_runner.go:195] Run: containerd --version
	I0728 21:03:59.330442  212382 out.go:177] * Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
	I0728 21:03:55.835459  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:58.334671  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:00.335213  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:03:59.331883  212382 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220728210213-9812 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0728 21:03:59.368865  212382 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0728 21:03:59.372761  212382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
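The one-liner above pins `host.minikube.internal` in `/etc/hosts` with a filter-then-rewrite pattern: strip any existing entry with `grep -v`, append the desired one, and copy the temp file back over the original. A self-contained sketch on a scratch file, with a hypothetical replacement IP, is:

```shell
# Scratch hosts file seeded with a stale entry (assumption for this sketch).
HOSTS="$(mktemp)"
printf '127.0.0.1 localhost\n192.168.94.1\thost.minikube.internal\n' > "$HOSTS"

# Filter out the old pinned entry, append the new one, then copy back.
TMP="$(mktemp)"
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; printf '192.168.99.1\thost.minikube.internal\n'; } > "$TMP"
cp "$TMP" "$HOSTS"

cat "$HOSTS"
```

The temp-file-then-copy step avoids truncating `/etc/hosts` while it is being read, which matters on a live machine.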
	I0728 21:03:59.384309  212382 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0728 21:03:59.384383  212382 ssh_runner.go:195] Run: sudo crictl images --output json
	I0728 21:03:59.409692  212382 containerd.go:547] all images are preloaded for containerd runtime.
	I0728 21:03:59.409717  212382 containerd.go:461] Images already preloaded, skipping extraction
	I0728 21:03:59.409759  212382 ssh_runner.go:195] Run: sudo crictl images --output json
	I0728 21:03:59.436212  212382 containerd.go:547] all images are preloaded for containerd runtime.
	I0728 21:03:59.436239  212382 cache_images.go:84] Images are preloaded, skipping loading
	I0728 21:03:59.436284  212382 ssh_runner.go:195] Run: sudo crictl info
	I0728 21:03:59.462646  212382 cni.go:95] Creating CNI manager for ""
	I0728 21:03:59.462670  212382 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0728 21:03:59.462683  212382 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 21:03:59.462696  212382 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220728210213-9812 NodeName:default-k8s-different-port-20220728210213-9812 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 21:03:59.462839  212382 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220728210213-9812"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 21:03:59.462995  212382 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220728210213-9812 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728210213-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0728 21:03:59.463055  212382 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 21:03:59.471156  212382 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 21:03:59.471240  212382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 21:03:59.479407  212382 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (539 bytes)
	I0728 21:03:59.495697  212382 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 21:03:59.510436  212382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0728 21:03:59.526781  212382 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0728 21:03:59.530245  212382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 21:03:59.541280  212382 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812 for IP: 192.168.94.2
	I0728 21:03:59.541427  212382 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 21:03:59.541480  212382 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 21:03:59.541575  212382 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.key
	I0728 21:03:59.541651  212382 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/apiserver.key.ad8e880a
	I0728 21:03:59.541754  212382 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/proxy-client.key
	I0728 21:03:59.541911  212382 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812.pem (1338 bytes)
	W0728 21:03:59.541952  212382 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812_empty.pem, impossibly tiny 0 bytes
	I0728 21:03:59.541968  212382 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 21:03:59.542007  212382 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1078 bytes)
	I0728 21:03:59.542043  212382 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 21:03:59.542078  212382 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 21:03:59.542137  212382 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem (1708 bytes)
	I0728 21:03:59.542961  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 21:03:59.563286  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 21:03:59.583315  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 21:03:59.602945  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 21:03:59.623585  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 21:03:59.643769  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0728 21:03:59.662657  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 21:03:59.682322  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0728 21:03:59.702284  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 21:03:59.723244  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812.pem --> /usr/share/ca-certificates/9812.pem (1338 bytes)
	I0728 21:03:59.743668  212382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem --> /usr/share/ca-certificates/98122.pem (1708 bytes)
	I0728 21:03:59.763724  212382 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 21:03:59.777895  212382 ssh_runner.go:195] Run: openssl version
	I0728 21:03:59.783066  212382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 21:03:59.791136  212382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 21:03:59.794556  212382 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0728 21:03:59.794611  212382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 21:03:59.799617  212382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 21:03:59.807298  212382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9812.pem && ln -fs /usr/share/ca-certificates/9812.pem /etc/ssl/certs/9812.pem"
	I0728 21:03:59.815443  212382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9812.pem
	I0728 21:03:59.818877  212382 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 20:32 /usr/share/ca-certificates/9812.pem
	I0728 21:03:59.818957  212382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9812.pem
	I0728 21:03:59.824232  212382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9812.pem /etc/ssl/certs/51391683.0"
	I0728 21:03:59.832100  212382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98122.pem && ln -fs /usr/share/ca-certificates/98122.pem /etc/ssl/certs/98122.pem"
	I0728 21:03:59.841348  212382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98122.pem
	I0728 21:03:59.845050  212382 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 20:32 /usr/share/ca-certificates/98122.pem
	I0728 21:03:59.845122  212382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98122.pem
	I0728 21:03:59.851083  212382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98122.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 21:03:59.860690  212382 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220728210213-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728210213-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 21:03:59.860784  212382 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0728 21:03:59.860847  212382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0728 21:03:59.893733  212382 cri.go:87] found id: "5cde08d4597bd6c238c52c9d92fe284f216bd62213d6e0c0af4da3d0c85b04b8"
	I0728 21:03:59.893757  212382 cri.go:87] found id: "c0b088d5f22000d9be6084afdd994f2b30c3f360f17af279d970c242fa7ca717"
	I0728 21:03:59.893764  212382 cri.go:87] found id: "206a4d565b5c989cba7278cf62eb682f4f7c9e443193ed341529d28c227d8233"
	I0728 21:03:59.893770  212382 cri.go:87] found id: "e99d5885ff0d5731ae75d1c4f98676444333644b297265274c543149f5f405fe"
	I0728 21:03:59.893776  212382 cri.go:87] found id: "03d739d3e791e9162146c113f46803dbdc91c5a9a8fc5dbfa6dcb7cdf445a582"
	I0728 21:03:59.893783  212382 cri.go:87] found id: "62e6d6b65369040ed83b9bbd2d52614e4b713fab09a11fed7c8264519c265d76"
	I0728 21:03:59.893792  212382 cri.go:87] found id: "91e011ac52b92ea19bb5ecc55d053d13f0b0cf10122b85840f8aa3dbb0d24117"
	I0728 21:03:59.893802  212382 cri.go:87] found id: "3ca3bbfc03210c6938d97a31d885271b16ad54b16c7f1aa3c667c7b0c0bfd47a"
	I0728 21:03:59.893815  212382 cri.go:87] found id: ""
	I0728 21:03:59.893867  212382 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0728 21:03:59.908077  212382 cri.go:114] JSON = null
	W0728 21:03:59.908139  212382 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0728 21:03:59.908212  212382 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 21:03:59.916220  212382 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 21:03:59.916249  212382 kubeadm.go:626] restartCluster start
	I0728 21:03:59.916348  212382 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 21:03:59.924167  212382 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:03:59.925126  212382 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220728210213-9812" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 21:03:59.925633  212382 kubeconfig.go:127] "default-k8s-different-port-20220728210213-9812" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 21:03:59.926260  212382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mka3434310bc9890bf6f7ac8ad0a69157716fb18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 21:03:59.927781  212382 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 21:03:59.935615  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:03:59.935671  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:03:59.944781  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:00.145205  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:00.145315  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:00.154611  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:00.345910  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:00.345982  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:00.355810  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:00.545029  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:00.545122  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:00.554709  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:00.744946  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:00.745044  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:00.754534  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:00.945823  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:00.945918  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:00.955127  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:01.145462  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:01.145566  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:01.155545  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:01.345902  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:01.346003  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:01.357675  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:01.544976  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:01.545080  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:01.554846  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:01.745110  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:01.745212  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:01.754506  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:01.945867  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:01.945969  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:01.955667  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:02.144884  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:02.144963  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:02.154769  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:02.344966  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:02.345058  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:02.354691  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:02.544990  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:02.545071  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:02.555012  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:03:59.133870  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:01.633001  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:02.835388  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:05.336265  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:02.745422  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:02.745501  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:02.755276  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:02.945512  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:02.945600  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:02.955029  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:02.955061  212382 api_server.go:165] Checking apiserver status ...
	I0728 21:04:02.955108  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 21:04:02.964093  212382 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:02.964122  212382 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0728 21:04:02.964130  212382 kubeadm.go:1092] stopping kube-system containers ...
	I0728 21:04:02.964145  212382 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0728 21:04:02.964204  212382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0728 21:04:02.990958  212382 cri.go:87] found id: "5cde08d4597bd6c238c52c9d92fe284f216bd62213d6e0c0af4da3d0c85b04b8"
	I0728 21:04:02.990990  212382 cri.go:87] found id: "c0b088d5f22000d9be6084afdd994f2b30c3f360f17af279d970c242fa7ca717"
	I0728 21:04:02.991001  212382 cri.go:87] found id: "206a4d565b5c989cba7278cf62eb682f4f7c9e443193ed341529d28c227d8233"
	I0728 21:04:02.991010  212382 cri.go:87] found id: "e99d5885ff0d5731ae75d1c4f98676444333644b297265274c543149f5f405fe"
	I0728 21:04:02.991017  212382 cri.go:87] found id: "03d739d3e791e9162146c113f46803dbdc91c5a9a8fc5dbfa6dcb7cdf445a582"
	I0728 21:04:02.991027  212382 cri.go:87] found id: "62e6d6b65369040ed83b9bbd2d52614e4b713fab09a11fed7c8264519c265d76"
	I0728 21:04:02.991037  212382 cri.go:87] found id: "91e011ac52b92ea19bb5ecc55d053d13f0b0cf10122b85840f8aa3dbb0d24117"
	I0728 21:04:02.991053  212382 cri.go:87] found id: "3ca3bbfc03210c6938d97a31d885271b16ad54b16c7f1aa3c667c7b0c0bfd47a"
	I0728 21:04:02.991068  212382 cri.go:87] found id: ""
	I0728 21:04:02.991077  212382 cri.go:232] Stopping containers: [5cde08d4597bd6c238c52c9d92fe284f216bd62213d6e0c0af4da3d0c85b04b8 c0b088d5f22000d9be6084afdd994f2b30c3f360f17af279d970c242fa7ca717 206a4d565b5c989cba7278cf62eb682f4f7c9e443193ed341529d28c227d8233 e99d5885ff0d5731ae75d1c4f98676444333644b297265274c543149f5f405fe 03d739d3e791e9162146c113f46803dbdc91c5a9a8fc5dbfa6dcb7cdf445a582 62e6d6b65369040ed83b9bbd2d52614e4b713fab09a11fed7c8264519c265d76 91e011ac52b92ea19bb5ecc55d053d13f0b0cf10122b85840f8aa3dbb0d24117 3ca3bbfc03210c6938d97a31d885271b16ad54b16c7f1aa3c667c7b0c0bfd47a]
	I0728 21:04:02.991130  212382 ssh_runner.go:195] Run: which crictl
	I0728 21:04:02.994686  212382 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 5cde08d4597bd6c238c52c9d92fe284f216bd62213d6e0c0af4da3d0c85b04b8 c0b088d5f22000d9be6084afdd994f2b30c3f360f17af279d970c242fa7ca717 206a4d565b5c989cba7278cf62eb682f4f7c9e443193ed341529d28c227d8233 e99d5885ff0d5731ae75d1c4f98676444333644b297265274c543149f5f405fe 03d739d3e791e9162146c113f46803dbdc91c5a9a8fc5dbfa6dcb7cdf445a582 62e6d6b65369040ed83b9bbd2d52614e4b713fab09a11fed7c8264519c265d76 91e011ac52b92ea19bb5ecc55d053d13f0b0cf10122b85840f8aa3dbb0d24117 3ca3bbfc03210c6938d97a31d885271b16ad54b16c7f1aa3c667c7b0c0bfd47a
	I0728 21:04:03.023593  212382 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 21:04:03.035211  212382 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 21:04:03.044471  212382 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 28 21:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 28 21:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2123 Jul 28 21:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 28 21:02 /etc/kubernetes/scheduler.conf
	
	I0728 21:04:03.044536  212382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0728 21:04:03.053096  212382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0728 21:04:03.061232  212382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0728 21:04:03.069361  212382 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:03.069428  212382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 21:04:03.077902  212382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0728 21:04:03.086062  212382 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 21:04:03.086130  212382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0728 21:04:03.094051  212382 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 21:04:03.103889  212382 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 21:04:03.103923  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 21:04:03.155810  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 21:04:03.972596  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 21:04:04.179527  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 21:04:04.237445  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0728 21:04:04.333884  212382 api_server.go:51] waiting for apiserver process to appear ...
	I0728 21:04:04.333979  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:04:04.845249  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:04:05.345264  212382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 21:04:05.417911  212382 api_server.go:71] duration metric: took 1.08403264s to wait for apiserver process to appear ...
	I0728 21:04:05.417947  212382 api_server.go:87] waiting for apiserver healthz status ...
	I0728 21:04:05.417962  212382 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0728 21:04:05.418356  212382 api_server.go:256] stopped: https://192.168.94.2:8444/healthz: Get "https://192.168.94.2:8444/healthz": dial tcp 192.168.94.2:8444: connect: connection refused
	I0728 21:04:05.919077  212382 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0728 21:04:03.633786  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:06.133738  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:07.835295  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:10.335437  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:09.109537  212382 api_server.go:266] https://192.168.94.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 21:04:09.109636  212382 api_server.go:102] status: https://192.168.94.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 21:04:09.418796  212382 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0728 21:04:09.423855  212382 api_server.go:266] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 21:04:09.423885  212382 api_server.go:102] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 21:04:09.919474  212382 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0728 21:04:09.924519  212382 api_server.go:266] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 21:04:09.924550  212382 api_server.go:102] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 21:04:10.419248  212382 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0728 21:04:10.424297  212382 api_server.go:266] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 21:04:10.424331  212382 api_server.go:102] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 21:04:10.918701  212382 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I0728 21:04:10.925376  212382 api_server.go:266] https://192.168.94.2:8444/healthz returned 200:
	ok
	I0728 21:04:10.932286  212382 api_server.go:140] control plane version: v1.24.3
	I0728 21:04:10.932381  212382 api_server.go:130] duration metric: took 5.514424407s to wait for apiserver health ...
	I0728 21:04:10.932402  212382 cni.go:95] Creating CNI manager for ""
	I0728 21:04:10.932418  212382 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0728 21:04:10.935481  212382 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0728 21:04:10.937107  212382 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0728 21:04:10.942351  212382 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ...
	I0728 21:04:10.942379  212382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0728 21:04:11.012352  212382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0728 21:04:11.963788  212382 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 21:04:11.972163  212382 system_pods.go:59] 9 kube-system pods found
	I0728 21:04:11.972209  212382 system_pods.go:61] "coredns-6d4b75cb6d-s8wj4" [ec3bccb1-bc2b-4c57-94a3-5f2b3df05042] Running
	I0728 21:04:11.972220  212382 system_pods.go:61] "etcd-default-k8s-different-port-20220728210213-9812" [6b7f86b4-ada8-4e59-a512-07aa98ecb6d5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 21:04:11.972226  212382 system_pods.go:61] "kindnet-v8mqh" [f4ebd13b-5cb6-4732-86d0-be50c8984a97] Running
	I0728 21:04:11.972234  212382 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220728210213-9812" [260bad63-2de0-4fff-8f0b-4cf777a54bed] Running
	I0728 21:04:11.972238  212382 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220728210213-9812" [a6b5c20e-01b7-40ab-a24f-00690c952fe0] Running
	I0728 21:04:11.972245  212382 system_pods.go:61] "kube-proxy-xcmjh" [c76d38e6-b689-4683-9251-0269a4b0c141] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0728 21:04:11.972251  212382 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220728210213-9812" [a9f49a47-4c60-4941-99aa-7cb61a2e8c32] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 21:04:11.972262  212382 system_pods.go:61] "metrics-server-5c6f97fb75-rtkxz" [3c871ef2-daac-4441-be4c-395a0ab5fe0a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 21:04:11.972267  212382 system_pods.go:61] "storage-provisioner" [7f030d68-c433-4860-a853-4154e80e108d] Running
	I0728 21:04:11.972273  212382 system_pods.go:74] duration metric: took 8.462212ms to wait for pod list to return data ...
	I0728 21:04:11.972279  212382 node_conditions.go:102] verifying NodePressure condition ...
	I0728 21:04:11.975284  212382 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0728 21:04:11.975331  212382 node_conditions.go:123] node cpu capacity is 8
	I0728 21:04:11.975343  212382 node_conditions.go:105] duration metric: took 3.059635ms to run NodePressure ...
	I0728 21:04:11.975362  212382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 21:04:12.143066  212382 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0728 21:04:12.147796  212382 kubeadm.go:777] kubelet initialised
	I0728 21:04:12.147822  212382 kubeadm.go:778] duration metric: took 4.727964ms waiting for restarted kubelet to initialise ...
	I0728 21:04:12.147830  212382 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 21:04:12.153548  212382 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-s8wj4" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:12.159222  212382 pod_ready.go:92] pod "coredns-6d4b75cb6d-s8wj4" in "kube-system" namespace has status "Ready":"True"
	I0728 21:04:12.159245  212382 pod_ready.go:81] duration metric: took 5.664265ms waiting for pod "coredns-6d4b75cb6d-s8wj4" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:12.159255  212382 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:12.721082  160802 out.go:204]   - Generating certificates and keys ...
	I0728 21:04:12.724181  160802 out.go:204]   - Booting up control plane ...
	W0728 21:04:12.726717  160802 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0728 21:02:16.240575    7625 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0728 21:04:12.726772  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0728 21:04:08.632838  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:10.633198  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:12.633913  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:12.835721  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:15.334521  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:14.174204  212382 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:16.670732  212382 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:13.465967  160802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 21:04:13.478145  160802 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 21:04:13.478204  160802 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 21:04:13.486471  160802 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 21:04:13.486522  160802 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 21:04:15.133224  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:17.133940  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:17.835303  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:20.335468  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:18.671985  212382 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:21.172229  212382 pod_ready.go:102] pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:19.632821  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:21.633402  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:22.834889  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:24.835428  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:22.671937  212382 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"True"
	I0728 21:04:22.671972  212382 pod_ready.go:81] duration metric: took 10.512710863s waiting for pod "etcd-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:22.671991  212382 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:22.677801  212382 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"True"
	I0728 21:04:22.677825  212382 pod_ready.go:81] duration metric: took 5.825596ms waiting for pod "kube-apiserver-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:22.677839  212382 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:22.683193  212382 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"True"
	I0728 21:04:22.683215  212382 pod_ready.go:81] duration metric: took 5.367248ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:22.683228  212382 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xcmjh" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:22.688178  212382 pod_ready.go:92] pod "kube-proxy-xcmjh" in "kube-system" namespace has status "Ready":"True"
	I0728 21:04:22.688203  212382 pod_ready.go:81] duration metric: took 4.967046ms waiting for pod "kube-proxy-xcmjh" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:22.688216  212382 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:22.693084  212382 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace has status "Ready":"True"
	I0728 21:04:22.693107  212382 pod_ready.go:81] duration metric: took 4.882799ms waiting for pod "kube-scheduler-default-k8s-different-port-20220728210213-9812" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:22.693116  212382 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace to be "Ready" ...
	I0728 21:04:25.076038  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:27.575272  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:23.634073  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:26.132931  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:27.335220  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:29.834958  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:29.576685  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:32.076147  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:28.133764  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:30.633437  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:32.633603  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:31.835176  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:34.335022  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:34.576676  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:36.578090  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:34.634027  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:37.133763  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:36.335544  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:38.834301  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:39.076244  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:41.576639  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:39.633422  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:41.634382  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:40.834383  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:42.835687  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:45.335401  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:44.076212  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:46.076524  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:44.133217  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:46.133404  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:47.335460  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:49.335660  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:48.576528  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:51.076826  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:48.633415  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:50.634704  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:52.635331  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:51.835608  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:53.836298  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:53.575380  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:55.577372  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:55.135428  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:57.633790  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:56.335486  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:58.834779  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:04:58.075920  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:00.075971  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:02.576454  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:00.132985  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:02.133461  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:00.835767  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:03.336195  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:05.076599  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:07.575882  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:04.133808  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:06.634439  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:05.834632  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:07.835353  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:10.334649  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:10.076555  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:12.576534  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:09.133676  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:11.633586  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:12.335554  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:14.335718  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:14.578248  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:17.077350  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:14.133688  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:16.134228  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:16.835883  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:19.335699  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:19.576840  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:22.076850  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:18.634539  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:21.133578  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:21.835113  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:23.835453  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:24.576362  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:26.576614  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:23.633974  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:26.133150  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:25.835615  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:28.335674  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:29.076851  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:31.576808  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:28.134079  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:30.634432  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:32.635308  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:30.835639  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:33.336176  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:34.076741  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:36.077895  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:34.635552  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:37.133897  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:35.835747  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:37.835886  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:40.335828  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:38.576456  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:40.577025  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:39.634015  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:41.635057  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:42.835055  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:44.835444  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:43.076088  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:45.077292  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:47.577201  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:44.133805  197178 pod_ready.go:102] pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:44.627002  197178 pod_ready.go:81] duration metric: took 4m0.006468083s waiting for pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace to be "Ready" ...
	E0728 21:05:44.627036  197178 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-bs2pm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0728 21:05:44.627059  197178 pod_ready.go:38] duration metric: took 4m11.065068408s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 21:05:44.627089  197178 kubeadm.go:630] restartCluster took 4m24.29522771s
	W0728 21:05:44.627252  197178 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0728 21:05:44.627293  197178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0728 21:05:47.726265  197178 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.098952316s)
	I0728 21:05:47.726333  197178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 21:05:47.738538  197178 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 21:05:47.747960  197178 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 21:05:47.748024  197178 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 21:05:47.756857  197178 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 21:05:47.756902  197178 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 21:05:48.059979  197178 out.go:204]   - Generating certificates and keys ...
	I0728 21:05:47.335669  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:49.335864  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:50.076800  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:52.576725  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:48.943549  197178 out.go:204]   - Booting up control plane ...
	I0728 21:05:51.340520  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:53.835350  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:56.994142  197178 out.go:204]   - Configuring RBAC rules ...
	I0728 21:05:57.420708  197178 cni.go:95] Creating CNI manager for ""
	I0728 21:05:57.420742  197178 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0728 21:05:57.424463  197178 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0728 21:05:55.076387  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:57.077553  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:57.428050  197178 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0728 21:05:57.433231  197178 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ...
	I0728 21:05:57.433266  197178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0728 21:05:57.519769  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0728 21:05:55.835652  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:57.836076  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:06:00.334992  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:59.575469  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:06:01.576314  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:05:58.436250  197178 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 21:05:58.436316  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:05:58.436352  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551 minikube.k8s.io/name=no-preload-20220728205940-9812 minikube.k8s.io/updated_at=2022_07_28T21_05_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:05:58.537136  197178 ops.go:34] apiserver oom_adj: -16
	I0728 21:05:58.537135  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:05:59.124095  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:05:59.624477  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:00.124491  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:00.624113  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:01.124598  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:01.624046  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:02.124883  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:02.624466  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:02.335157  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:06:04.335918  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:06:03.578169  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:06:06.076274  212382 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rtkxz" in "kube-system" namespace has status "Ready":"False"
	I0728 21:06:03.124816  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:03.624001  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:04.123989  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:04.623949  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:05.124722  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:05.624086  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:06.124211  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:06.624712  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:07.124783  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:07.624057  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:09.456081  160802 out.go:204]   - Generating certificates and keys ...
	I0728 21:06:09.460704  160802 out.go:204]   - Booting up control plane ...
	I0728 21:06:09.463187  160802 kubeadm.go:397] StartCluster complete in 7m56.100147537s
	I0728 21:06:09.463242  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0728 21:06:09.463303  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0728 21:06:09.492218  160802 cri.go:87] found id: ""
	I0728 21:06:09.492244  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.492250  160802 logs.go:276] No container was found matching "kube-apiserver"
	I0728 21:06:09.492257  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0728 21:06:09.492327  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0728 21:06:09.519747  160802 cri.go:87] found id: ""
	I0728 21:06:09.519773  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.519779  160802 logs.go:276] No container was found matching "etcd"
	I0728 21:06:09.519786  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0728 21:06:09.519843  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0728 21:06:09.546296  160802 cri.go:87] found id: ""
	I0728 21:06:09.546331  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.546340  160802 logs.go:276] No container was found matching "coredns"
	I0728 21:06:09.546348  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0728 21:06:09.546505  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0728 21:06:09.574600  160802 cri.go:87] found id: ""
	I0728 21:06:09.574627  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.574634  160802 logs.go:276] No container was found matching "kube-scheduler"
	I0728 21:06:09.574640  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0728 21:06:09.574701  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0728 21:06:09.604664  160802 cri.go:87] found id: ""
	I0728 21:06:09.604694  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.604700  160802 logs.go:276] No container was found matching "kube-proxy"
	I0728 21:06:09.604708  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0728 21:06:09.604798  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0728 21:06:09.634288  160802 cri.go:87] found id: ""
	I0728 21:06:09.634320  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.634329  160802 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 21:06:09.634339  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0728 21:06:09.634400  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0728 21:06:09.666085  160802 cri.go:87] found id: ""
	I0728 21:06:09.666116  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.666123  160802 logs.go:276] No container was found matching "storage-provisioner"
	I0728 21:06:09.666130  160802 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0728 21:06:09.666186  160802 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0728 21:06:09.697616  160802 cri.go:87] found id: ""
	I0728 21:06:09.697646  160802 logs.go:274] 0 containers: []
	W0728 21:06:09.697656  160802 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 21:06:09.697671  160802 logs.go:123] Gathering logs for dmesg ...
	I0728 21:06:09.697688  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 21:06:09.715231  160802 logs.go:123] Gathering logs for describe nodes ...
	I0728 21:06:09.715278  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 21:06:09.774303  160802 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 21:06:09.774333  160802 logs.go:123] Gathering logs for containerd ...
	I0728 21:06:09.774345  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0728 21:06:09.822586  160802 logs.go:123] Gathering logs for container status ...
	I0728 21:06:09.822641  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 21:06:09.856873  160802 logs.go:123] Gathering logs for kubelet ...
	I0728 21:06:09.856900  160802 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0728 21:06:09.905506  160802 logs.go:138] Found kubelet problem: Jul 28 21:06:09 kubernetes-upgrade-20220728205630-9812 kubelet[11608]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	W0728 21:06:09.966501  160802 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0728 21:04:13.523562    9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0728 21:06:09.966572  160802 out.go:239] * 
	W0728 21:06:09.966810  160802 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0728 21:04:13.523562    9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 21:06:09.966848  160802 out.go:239] * 
	W0728 21:06:09.967728  160802 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 21:06:09.971858  160802 out.go:177] X Problems detected in kubelet:
	I0728 21:06:09.973883  160802 out.go:177]   Jul 28 21:06:09 kubernetes-upgrade-20220728205630-9812 kubelet[11608]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0728 21:06:09.978588  160802 out.go:177] 
	W0728 21:06:09.981692  160802 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0728 21:04:13.523562    9731 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 21:06:09.981884  160802 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0728 21:06:09.981956  160802 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0728 21:06:09.986376  160802 out.go:177] 
	I0728 21:06:06.335943  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:06:08.835019  202113 pod_ready.go:102] pod "metrics-server-7958775c-kcw7g" in "kube-system" namespace has status "Ready":"False"
	I0728 21:06:08.124523  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:08.624455  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:09.123924  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:09.624214  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:10.123963  197178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:06:10.345514  197178 kubeadm.go:1045] duration metric: took 11.909246023s to wait for elevateKubeSystemPrivileges.
	I0728 21:06:10.345554  197178 kubeadm.go:397] StartCluster complete in 4m50.062505382s
	I0728 21:06:10.345577  197178 settings.go:142] acquiring lock: {Name:mkde2c38eaf8dba18ec4a329effa3f2c12221de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 21:06:10.345717  197178 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 21:06:10.347802  197178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mka3434310bc9890bf6f7ac8ad0a69157716fb18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 21:06:10.935393  197178 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220728205940-9812" rescaled to 1
	I0728 21:06:10.935467  197178 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0728 21:06:10.938504  197178 out.go:177] * Verifying Kubernetes components...
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Thu 2022-07-28 20:57:24 UTC, end at Thu 2022-07-28 21:06:11 UTC. --
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.237062793Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.254901788Z" level=info msg="StopPodSandbox for \"this\""
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.254970195Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.272824654Z" level=info msg="StopPodSandbox for \"endpoint\""
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.272898418Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.292644300Z" level=info msg="StopPodSandbox for \"is\""
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.292716294Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.310517048Z" level=info msg="StopPodSandbox for \"deprecated,\""
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.310591757Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.330972400Z" level=info msg="StopPodSandbox for \"please\""
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.331035339Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.350318303Z" level=info msg="StopPodSandbox for \"consider\""
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.350389754Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.367442642Z" level=info msg="StopPodSandbox for \"using\""
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.367508036Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.386621034Z" level=info msg="StopPodSandbox for \"full\""
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.386675447Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.404421508Z" level=info msg="StopPodSandbox for \"URL\""
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.404476203Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.422169228Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.422226506Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.440344675Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.440415912Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.458393725Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Jul 28 21:04:13 kubernetes-upgrade-20220728205630-9812 containerd[501]: time="2022-07-28T21:04:13.458971337Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
	[  +0.000001] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	[  +1.009641] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
	[  +0.000007] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	[  +0.003983] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
	[  +0.000006] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	[  +0.000049] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
	[  +0.000005] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	[  +2.011738] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
	[  +0.000006] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	[  +4.223600] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
	[  +0.000008] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	[  +0.000048] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	[  +0.003951] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
	[  +0.000007] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	[  +8.187203] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
	[  +0.000005] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	[  +0.000023] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-5c4ccc487b54
	[  +0.000003] ll header: 00000000: 02 42 31 10 00 0b 02 42 c0 a8 4c 02 08 00
	
	* 
	* ==> kernel <==
	*  21:06:11 up 48 min,  0 users,  load average: 1.00, 2.37, 2.31
	Linux kubernetes-upgrade-20220728205630-9812 5.15.0-1013-gcp #18~20.04.1-Ubuntu SMP Sun Jul 3 08:20:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 20:57:24 UTC, end at Thu 2022-07-28 21:06:11 UTC. --
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --storage-driver-buffer-duration duration                  Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction (default 1m0s) (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --storage-driver-db string                                 database name (default "cadvisor") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --storage-driver-host string                               database host:port (default "localhost:8086") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --storage-driver-password string                           database password (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --storage-driver-secure                                    use secure connection with database (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --storage-driver-table string                              table name (default "stats") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --storage-driver-user string                               database username (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --streaming-connection-idle-timeout duration               Maximum time a streaming connection can be idle before the connection is automatically closed. 0 indicates no timeout. Example: '5m'. Note: All connections to the kubelet server have a maximum duration of 4 hours. (default 4h0m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --sync-frequency duration                                  Max period between synchronizing running containers and config (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --system-cgroups string                                    Optional absolute name of cgroups in which to place all non-kernel processes that are not already inside a cgroup under '/'. Empty for no container. Rolling back the flag requires a reboot. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --system-reserved mapStringString                          A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi) pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more detail. [default=none] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --system-reserved-cgroup string                            Absolute name of the top level cgroup that is used to manage non-kubernetes components for which compute resources were reserved via '--system-reserved' flag. Ex. '/system-reserved'. [default=''] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --tls-cert-file string                                     File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --tls-cipher-suites strings                                Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:                 Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:                 Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --tls-min-version string                                   Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --tls-private-key-file string                              File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --topology-manager-policy string                           Topology Manager policy to use. Possible values: 'none', 'best-effort', 'restricted', 'single-numa-node'. (default "none") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --topology-manager-scope string                            Scope to which topology hints applied. Topology Manager collects hints from Hint Providers and applies them to defined scope to ensure the pod admission. Possible values: 'container', 'pod'. (default "container") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:   -v, --v Level                                                  number for the log level verbosity
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --version version[=true]                                   Print version information and quit
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --vmodule pattern=N,...                                    comma-separated list of pattern=N settings for file-filtered logging (only works for text log format)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --volume-plugin-dir string                                 The full path of the directory in which to search for additional third party volume plugins (default "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Jul 28 21:06:11 kubernetes-upgrade-20220728205630-9812 kubelet[11889]:       --volume-stats-agg-period duration                         Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes.  To disable volume calculations, set to a negative number. (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	
	

-- /stdout --
** stderr ** 
	E0728 21:06:11.772557  220873 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220728205630-9812 -n kubernetes-upgrade-20220728205630-9812
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220728205630-9812 -n kubernetes-upgrade-20220728205630-9812: exit status 2 (494.09816ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-20220728205630-9812" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220728205630-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220728205630-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220728205630-9812: (2.486251428s)
--- FAIL: TestKubernetesUpgrade (584.47s)

TestNetworkPlugins/group/calico/Start (532.33s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220728205822-9812 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220728205822-9812 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (8m52.312395553s)

-- stdout --
	* [calico-20220728205822-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node calico-20220728205822-9812 in cluster calico-20220728205822-9812
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	
-- /stdout --
** stderr ** 
	I0728 21:09:30.958123  257482 out.go:296] Setting OutFile to fd 1 ...
	I0728 21:09:30.958237  257482 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:09:30.958247  257482 out.go:309] Setting ErrFile to fd 2...
	I0728 21:09:30.958252  257482 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 21:09:30.958376  257482 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 21:09:30.959552  257482 out.go:303] Setting JSON to false
	I0728 21:09:30.961639  257482 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3121,"bootTime":1659039450,"procs":856,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0728 21:09:30.961716  257482 start.go:125] virtualization: kvm guest
	I0728 21:09:30.964511  257482 out.go:177] * [calico-20220728205822-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0728 21:09:30.966410  257482 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 21:09:30.966409  257482 notify.go:193] Checking for updates...
	I0728 21:09:30.969227  257482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 21:09:30.971070  257482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 21:09:30.972928  257482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 21:09:30.974568  257482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0728 21:09:30.976610  257482 config.go:178] Loaded profile config "cilium-20220728205822-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 21:09:30.976737  257482 config.go:178] Loaded profile config "embed-certs-20220728210649-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 21:09:30.976833  257482 config.go:178] Loaded profile config "kindnet-20220728205821-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 21:09:30.976908  257482 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 21:09:31.022545  257482 docker.go:137] docker version: linux-20.10.17
	I0728 21:09:31.022665  257482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 21:09:31.144967  257482 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-07-28 21:09:31.0559403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 21:09:31.145069  257482 docker.go:254] overlay module found
	I0728 21:09:31.147931  257482 out.go:177] * Using the docker driver based on user configuration
	I0728 21:09:31.149446  257482 start.go:284] selected driver: docker
	I0728 21:09:31.149470  257482 start.go:808] validating driver "docker" against <nil>
	I0728 21:09:31.149492  257482 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 21:09:31.150449  257482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 21:09:31.294021  257482 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-07-28 21:09:31.188408924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 21:09:31.294196  257482 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0728 21:09:31.294431  257482 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 21:09:31.296975  257482 out.go:177] * Using Docker driver with root privileges
	I0728 21:09:31.298453  257482 cni.go:95] Creating CNI manager for "calico"
	I0728 21:09:31.298477  257482 start_flags.go:305] Found "Calico" CNI - setting NetworkPlugin=cni
	I0728 21:09:31.298490  257482 start_flags.go:310] config:
	{Name:calico-20220728205822-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:calico-20220728205822-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 21:09:31.300575  257482 out.go:177] * Starting control plane node calico-20220728205822-9812 in cluster calico-20220728205822-9812
	I0728 21:09:31.302239  257482 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0728 21:09:31.303840  257482 out.go:177] * Pulling base image ...
	I0728 21:09:31.305251  257482 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0728 21:09:31.305291  257482 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 21:09:31.305311  257482 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4
	I0728 21:09:31.305324  257482 cache.go:57] Caching tarball of preloaded images
	I0728 21:09:31.305613  257482 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 21:09:31.305633  257482 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on containerd
	I0728 21:09:31.305770  257482 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/config.json ...
	I0728 21:09:31.305811  257482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/config.json: {Name:mk878c1d33a792c4d910d39e5a3582f23c40398e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 21:09:31.354002  257482 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 21:09:31.354043  257482 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 21:09:31.354061  257482 cache.go:208] Successfully downloaded all kic artifacts
	I0728 21:09:31.354116  257482 start.go:370] acquiring machines lock for calico-20220728205822-9812: {Name:mk09eb38d99126d604894441cde59e925b15186a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 21:09:31.354239  257482 start.go:374] acquired machines lock for "calico-20220728205822-9812" in 103.095µs
	I0728 21:09:31.354266  257482 start.go:92] Provisioning new machine with config: &{Name:calico-20220728205822-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:calico-20220728205822-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0728 21:09:31.354382  257482 start.go:132] createHost starting for "" (driver="docker")
	I0728 21:09:31.358466  257482 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0728 21:09:31.358815  257482 start.go:166] libmachine.API.Create for "calico-20220728205822-9812" (driver="docker")
	I0728 21:09:31.358858  257482 client.go:168] LocalClient.Create starting
	I0728 21:09:31.358994  257482 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem
	I0728 21:09:31.359035  257482 main.go:134] libmachine: Decoding PEM data...
	I0728 21:09:31.359055  257482 main.go:134] libmachine: Parsing certificate...
	I0728 21:09:31.359142  257482 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem
	I0728 21:09:31.359164  257482 main.go:134] libmachine: Decoding PEM data...
	I0728 21:09:31.359178  257482 main.go:134] libmachine: Parsing certificate...
	I0728 21:09:31.359584  257482 cli_runner.go:164] Run: docker network inspect calico-20220728205822-9812 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0728 21:09:31.400997  257482 cli_runner.go:211] docker network inspect calico-20220728205822-9812 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0728 21:09:31.401089  257482 network_create.go:272] running [docker network inspect calico-20220728205822-9812] to gather additional debugging logs...
	I0728 21:09:31.401117  257482 cli_runner.go:164] Run: docker network inspect calico-20220728205822-9812
	W0728 21:09:31.440817  257482 cli_runner.go:211] docker network inspect calico-20220728205822-9812 returned with exit code 1
	I0728 21:09:31.440849  257482 network_create.go:275] error running [docker network inspect calico-20220728205822-9812]: docker network inspect calico-20220728205822-9812: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220728205822-9812
	I0728 21:09:31.440870  257482 network_create.go:277] output of [docker network inspect calico-20220728205822-9812]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220728205822-9812
	
	** /stderr **
	I0728 21:09:31.440909  257482 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0728 21:09:31.484620  257482 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-263943fd194d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9e:9d:f1:9e}}
	I0728 21:09:31.485604  257482 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-6f463e99dc39 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:5c:6e:51:df}}
	I0728 21:09:31.486431  257482 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-6c7ae9caf733 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:39:0e:ed:c6}}
	I0728 21:09:31.487585  257482 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc0006ea388] misses:0}
	I0728 21:09:31.487629  257482 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 21:09:31.487639  257482 network_create.go:115] attempt to create docker network calico-20220728205822-9812 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0728 21:09:31.487692  257482 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220728205822-9812 calico-20220728205822-9812
	I0728 21:09:31.572215  257482 network_create.go:99] docker network calico-20220728205822-9812 192.168.76.0/24 created
	I0728 21:09:31.572271  257482 kic.go:106] calculated static IP "192.168.76.2" for the "calico-20220728205822-9812" container
	I0728 21:09:31.572346  257482 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0728 21:09:31.613519  257482 cli_runner.go:164] Run: docker volume create calico-20220728205822-9812 --label name.minikube.sigs.k8s.io=calico-20220728205822-9812 --label created_by.minikube.sigs.k8s.io=true
	I0728 21:09:31.651121  257482 oci.go:103] Successfully created a docker volume calico-20220728205822-9812
	I0728 21:09:31.651222  257482 cli_runner.go:164] Run: docker run --rm --name calico-20220728205822-9812-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220728205822-9812 --entrypoint /usr/bin/test -v calico-20220728205822-9812:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0728 21:09:32.317663  257482 oci.go:107] Successfully prepared a docker volume calico-20220728205822-9812
	I0728 21:09:32.317718  257482 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0728 21:09:32.317742  257482 kic.go:179] Starting extracting preloaded images to volume ...
	I0728 21:09:32.317811  257482 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220728205822-9812:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0728 21:09:39.535591  257482 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220728205822-9812:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (7.217688382s)
	I0728 21:09:39.535628  257482 kic.go:188] duration metric: took 7.217882 seconds to extract preloaded images to volume
	W0728 21:09:39.535788  257482 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0728 21:09:39.535897  257482 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0728 21:09:39.679725  257482 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220728205822-9812 --name calico-20220728205822-9812 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220728205822-9812 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220728205822-9812 --network calico-20220728205822-9812 --ip 192.168.76.2 --volume calico-20220728205822-9812:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0728 21:09:40.171131  257482 cli_runner.go:164] Run: docker container inspect calico-20220728205822-9812 --format={{.State.Running}}
	I0728 21:09:40.214450  257482 cli_runner.go:164] Run: docker container inspect calico-20220728205822-9812 --format={{.State.Status}}
	I0728 21:09:40.255746  257482 cli_runner.go:164] Run: docker exec calico-20220728205822-9812 stat /var/lib/dpkg/alternatives/iptables
	I0728 21:09:40.346296  257482 oci.go:144] the created container "calico-20220728205822-9812" has a running status.
	I0728 21:09:40.346332  257482 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/calico-20220728205822-9812/id_rsa...
	I0728 21:09:40.919242  257482 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/calico-20220728205822-9812/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0728 21:09:41.037807  257482 cli_runner.go:164] Run: docker container inspect calico-20220728205822-9812 --format={{.State.Status}}
	I0728 21:09:41.081945  257482 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0728 21:09:41.081977  257482 kic_runner.go:114] Args: [docker exec --privileged calico-20220728205822-9812 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0728 21:09:41.219619  257482 cli_runner.go:164] Run: docker container inspect calico-20220728205822-9812 --format={{.State.Status}}
	I0728 21:09:41.270019  257482 machine.go:88] provisioning docker machine ...
	I0728 21:09:41.270058  257482 ubuntu.go:169] provisioning hostname "calico-20220728205822-9812"
	I0728 21:09:41.270122  257482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220728205822-9812
	I0728 21:09:41.315819  257482 main.go:134] libmachine: Using SSH client type: native
	I0728 21:09:41.316085  257482 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0728 21:09:41.316138  257482 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220728205822-9812 && echo "calico-20220728205822-9812" | sudo tee /etc/hostname
	I0728 21:09:41.462203  257482 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220728205822-9812
	
	I0728 21:09:41.462288  257482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220728205822-9812
	I0728 21:09:41.507050  257482 main.go:134] libmachine: Using SSH client type: native
	I0728 21:09:41.507205  257482 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0728 21:09:41.507226  257482 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220728205822-9812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220728205822-9812/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220728205822-9812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 21:09:41.631517  257482 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 21:09:41.631560  257482 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 21:09:41.631585  257482 ubuntu.go:177] setting up certificates
	I0728 21:09:41.631596  257482 provision.go:83] configureAuth start
	I0728 21:09:41.631655  257482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220728205822-9812
	I0728 21:09:41.676273  257482 provision.go:138] copyHostCerts
	I0728 21:09:41.676346  257482 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 21:09:41.676358  257482 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 21:09:41.676428  257482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1078 bytes)
	I0728 21:09:41.676842  257482 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 21:09:41.676867  257482 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 21:09:41.676951  257482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 21:09:41.677071  257482 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 21:09:41.677085  257482 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 21:09:41.677129  257482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 21:09:41.677269  257482 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.calico-20220728205822-9812 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220728205822-9812]
	I0728 21:09:42.059590  257482 provision.go:172] copyRemoteCerts
	I0728 21:09:42.059658  257482 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 21:09:42.059702  257482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220728205822-9812
	I0728 21:09:42.098659  257482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/calico-20220728205822-9812/id_rsa Username:docker}
	I0728 21:09:42.191334  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0728 21:09:42.214413  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0728 21:09:42.233271  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 21:09:42.252539  257482 provision.go:86] duration metric: configureAuth took 620.931945ms
	I0728 21:09:42.252566  257482 ubuntu.go:193] setting minikube options for container-runtime
	I0728 21:09:42.252733  257482 config.go:178] Loaded profile config "calico-20220728205822-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 21:09:42.252754  257482 machine.go:91] provisioned docker machine in 982.704636ms
	I0728 21:09:42.252763  257482 client.go:171] LocalClient.Create took 10.893864541s
	I0728 21:09:42.252787  257482 start.go:174] duration metric: libmachine.API.Create for "calico-20220728205822-9812" took 10.893971248s
	I0728 21:09:42.252800  257482 start.go:307] post-start starting for "calico-20220728205822-9812" (driver="docker")
	I0728 21:09:42.252808  257482 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 21:09:42.252863  257482 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 21:09:42.252911  257482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220728205822-9812
	I0728 21:09:42.292953  257482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/calico-20220728205822-9812/id_rsa Username:docker}
	I0728 21:09:42.387267  257482 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 21:09:42.390286  257482 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 21:09:42.390317  257482 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 21:09:42.390333  257482 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 21:09:42.390341  257482 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 21:09:42.390356  257482 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 21:09:42.390422  257482 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 21:09:42.390517  257482 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem -> 98122.pem in /etc/ssl/certs
	I0728 21:09:42.390629  257482 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 21:09:42.397996  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem --> /etc/ssl/certs/98122.pem (1708 bytes)
	I0728 21:09:42.416996  257482 start.go:310] post-start completed in 164.179636ms
	I0728 21:09:42.417441  257482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220728205822-9812
	I0728 21:09:42.461847  257482 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/config.json ...
	I0728 21:09:42.462133  257482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 21:09:42.462184  257482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220728205822-9812
	I0728 21:09:42.501632  257482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/calico-20220728205822-9812/id_rsa Username:docker}
	I0728 21:09:42.591637  257482 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 21:09:42.595642  257482 start.go:135] duration metric: createHost completed in 11.24124696s
	I0728 21:09:42.595671  257482 start.go:82] releasing machines lock for "calico-20220728205822-9812", held for 11.24141742s
	I0728 21:09:42.595766  257482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220728205822-9812
	I0728 21:09:42.631992  257482 ssh_runner.go:195] Run: systemctl --version
	I0728 21:09:42.632050  257482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220728205822-9812
	I0728 21:09:42.632088  257482 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 21:09:42.632135  257482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220728205822-9812
	I0728 21:09:42.669566  257482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/calico-20220728205822-9812/id_rsa Username:docker}
	I0728 21:09:42.674420  257482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/calico-20220728205822-9812/id_rsa Username:docker}
	I0728 21:09:42.759588  257482 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0728 21:09:42.788539  257482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 21:09:42.800340  257482 docker.go:188] disabling docker service ...
	I0728 21:09:42.800404  257482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0728 21:09:42.824867  257482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0728 21:09:42.841395  257482 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0728 21:09:42.945406  257482 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0728 21:09:43.040744  257482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0728 21:09:43.053832  257482 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 21:09:43.068969  257482 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0728 21:09:43.077757  257482 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0728 21:09:43.087443  257482 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0728 21:09:43.096139  257482 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
	I0728 21:09:43.105137  257482 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0728 21:09:43.115459  257482 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0728 21:09:43.132433  257482 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0728 21:09:43.140855  257482 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0728 21:09:43.148165  257482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 21:09:43.229506  257482 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0728 21:09:43.318342  257482 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0728 21:09:43.318418  257482 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0728 21:09:43.322845  257482 start.go:471] Will wait 60s for crictl version
	I0728 21:09:43.322965  257482 ssh_runner.go:195] Run: sudo crictl version
	I0728 21:09:43.355052  257482 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-07-28T21:09:43Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0728 21:09:54.403000  257482 ssh_runner.go:195] Run: sudo crictl version
	I0728 21:09:54.430049  257482 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0728 21:09:54.430120  257482 ssh_runner.go:195] Run: containerd --version
	I0728 21:09:54.465085  257482 ssh_runner.go:195] Run: containerd --version
	I0728 21:09:54.498166  257482 out.go:177] * Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
	I0728 21:09:54.499816  257482 cli_runner.go:164] Run: docker network inspect calico-20220728205822-9812 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0728 21:09:54.534578  257482 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0728 21:09:54.538168  257482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 21:09:54.548285  257482 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0728 21:09:54.548352  257482 ssh_runner.go:195] Run: sudo crictl images --output json
	I0728 21:09:54.572768  257482 containerd.go:547] all images are preloaded for containerd runtime.
	I0728 21:09:54.572791  257482 containerd.go:461] Images already preloaded, skipping extraction
	I0728 21:09:54.572842  257482 ssh_runner.go:195] Run: sudo crictl images --output json
	I0728 21:09:54.598791  257482 containerd.go:547] all images are preloaded for containerd runtime.
	I0728 21:09:54.598818  257482 cache_images.go:84] Images are preloaded, skipping loading
	I0728 21:09:54.598862  257482 ssh_runner.go:195] Run: sudo crictl info
	I0728 21:09:54.623814  257482 cni.go:95] Creating CNI manager for "calico"
	I0728 21:09:54.623840  257482 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 21:09:54.623852  257482 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220728205822-9812 NodeName:calico-20220728205822-9812 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 21:09:54.623965  257482 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-20220728205822-9812"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 21:09:54.624043  257482 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-20220728205822-9812 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:calico-20220728205822-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0728 21:09:54.624093  257482 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 21:09:54.631644  257482 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 21:09:54.631705  257482 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 21:09:54.638918  257482 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (519 bytes)
	I0728 21:09:54.652458  257482 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 21:09:54.665966  257482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2055 bytes)
	I0728 21:09:54.679232  257482 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0728 21:09:54.682166  257482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 21:09:54.691774  257482 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812 for IP: 192.168.76.2
	I0728 21:09:54.691886  257482 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 21:09:54.691933  257482 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 21:09:54.692001  257482 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/client.key
	I0728 21:09:54.692018  257482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/client.crt with IP's: []
	I0728 21:09:54.966474  257482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/client.crt ...
	I0728 21:09:54.966506  257482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/client.crt: {Name:mk8c0f28f35510931fc701fc0c9abb34453e466e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 21:09:54.966731  257482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/client.key ...
	I0728 21:09:54.966747  257482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/client.key: {Name:mk5d758e89a14a2277d84c99021dd96b60ee040f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 21:09:54.966862  257482 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/apiserver.key.31bdca25
	I0728 21:09:54.966915  257482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0728 21:09:55.184128  257482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/apiserver.crt.31bdca25 ...
	I0728 21:09:55.184173  257482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/apiserver.crt.31bdca25: {Name:mkf5ce8125cdb531a2b183396553456be55f9eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 21:09:55.184416  257482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/apiserver.key.31bdca25 ...
	I0728 21:09:55.184435  257482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/apiserver.key.31bdca25: {Name:mk0f9706635ba258ad77a163785bd0537c0169b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 21:09:55.184562  257482 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/apiserver.crt
	I0728 21:09:55.184661  257482 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/apiserver.key
	I0728 21:09:55.184735  257482 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/proxy-client.key
	I0728 21:09:55.184758  257482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/proxy-client.crt with IP's: []
	I0728 21:09:55.393116  257482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/proxy-client.crt ...
	I0728 21:09:55.393147  257482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/proxy-client.crt: {Name:mke1544d2e153b1efedf117daca3ddac027e22c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 21:09:55.393354  257482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/proxy-client.key ...
	I0728 21:09:55.393374  257482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/proxy-client.key: {Name:mk725e748c3a07654b35f0523e8dbf5aa01fd589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 21:09:55.393590  257482 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812.pem (1338 bytes)
	W0728 21:09:55.393633  257482 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812_empty.pem, impossibly tiny 0 bytes
	I0728 21:09:55.393646  257482 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 21:09:55.393679  257482 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1078 bytes)
	I0728 21:09:55.393724  257482 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 21:09:55.393773  257482 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 21:09:55.393830  257482 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem (1708 bytes)
	I0728 21:09:55.394447  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 21:09:55.415873  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 21:09:55.436206  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 21:09:55.456806  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728205822-9812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 21:09:55.475547  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 21:09:55.494190  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0728 21:09:55.513098  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 21:09:55.531935  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0728 21:09:55.550349  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 21:09:55.569085  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9812.pem --> /usr/share/ca-certificates/9812.pem (1338 bytes)
	I0728 21:09:55.588814  257482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98122.pem --> /usr/share/ca-certificates/98122.pem (1708 bytes)
	I0728 21:09:55.607310  257482 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 21:09:55.621306  257482 ssh_runner.go:195] Run: openssl version
	I0728 21:09:55.626413  257482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 21:09:55.634485  257482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 21:09:55.637749  257482 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 20:27 /usr/share/ca-certificates/minikubeCA.pem
	I0728 21:09:55.637816  257482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 21:09:55.642678  257482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 21:09:55.650804  257482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9812.pem && ln -fs /usr/share/ca-certificates/9812.pem /etc/ssl/certs/9812.pem"
	I0728 21:09:55.659233  257482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9812.pem
	I0728 21:09:55.662613  257482 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 20:32 /usr/share/ca-certificates/9812.pem
	I0728 21:09:55.662670  257482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9812.pem
	I0728 21:09:55.667711  257482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9812.pem /etc/ssl/certs/51391683.0"
	I0728 21:09:55.676099  257482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98122.pem && ln -fs /usr/share/ca-certificates/98122.pem /etc/ssl/certs/98122.pem"
	I0728 21:09:55.684313  257482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98122.pem
	I0728 21:09:55.687570  257482 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 20:32 /usr/share/ca-certificates/98122.pem
	I0728 21:09:55.687632  257482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98122.pem
	I0728 21:09:55.692652  257482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98122.pem /etc/ssl/certs/3ec20f2e.0"
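The `ln -fs ... /etc/ssl/certs/b5213941.0` steps above wire certs into the OpenSSL trust store, whose lookup is by subject hash: the link must be named `<hash>.0`, where the hash comes from `openssl x509 -hash`. A minimal standalone sketch (throwaway paths and a demo CA, not the files from this log):

```shell
set -eu
workdir="$(mktemp -d)"
# Generate a throwaway self-signed CA purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.pem" 2>/dev/null
# `openssl x509 -hash` prints the 8-hex-digit subject hash that names
# the trust-store symlink (the "b5213941" in the log's link step).
hash="$(openssl x509 -hash -noout -in "$workdir/ca.pem")"
ln -fs "$workdir/ca.pem" "$workdir/$hash.0"
ls -l "$workdir/$hash.0"
```

With the hash-named link in place, `openssl verify -CApath "$workdir" ...` can locate the CA by hash rather than filename.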
	I0728 21:09:55.700762  257482 kubeadm.go:395] StartCluster: {Name:calico-20220728205822-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:calico-20220728205822-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 21:09:55.700883  257482 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0728 21:09:55.700924  257482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0728 21:09:55.726592  257482 cri.go:87] found id: ""
	I0728 21:09:55.726671  257482 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 21:09:55.734115  257482 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 21:09:55.741634  257482 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 21:09:55.741700  257482 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 21:09:55.749259  257482 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 21:09:55.749319  257482 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 21:09:56.032977  257482 out.go:204]   - Generating certificates and keys ...
	I0728 21:09:59.276544  257482 out.go:204]   - Booting up control plane ...
	I0728 21:10:09.351962  257482 out.go:204]   - Configuring RBAC rules ...
	I0728 21:10:09.819869  257482 cni.go:95] Creating CNI manager for "calico"
	I0728 21:10:09.822162  257482 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0728 21:10:09.824167  257482 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ...
	I0728 21:10:09.824201  257482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202050 bytes)
	I0728 21:10:09.848139  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0728 21:10:11.368702  257482 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.520527067s)
	I0728 21:10:11.368746  257482 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 21:10:11.368862  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551 minikube.k8s.io/name=calico-20220728205822-9812 minikube.k8s.io/updated_at=2022_07_28T21_10_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:11.368864  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:11.375910  257482 ops.go:34] apiserver oom_adj: -16
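The `ops.go:34] apiserver oom_adj: -16` line above comes from reading `/proc/$(pgrep kube-apiserver)/oom_adj`, as shown in the preceding `Run:` line. `oom_adj` is the legacy interface; current kernels expose the equivalent `oom_score_adj`. As a standalone sketch (no cluster needed), the same read against the current shell instead of kube-apiserver:

```shell
# Read the OOM-killer adjustment for this shell's own pid; for the
# apiserver the log substitutes $(pgrep kube-apiserver) for $$.
cat "/proc/$$/oom_score_adj"
```

A negative value makes the OOM killer less likely to pick the process, which is why the apiserver is set to -16.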
	I0728 21:10:11.508386  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:12.079232  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:12.579066  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:13.079650  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:13.579015  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:14.079789  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:14.579014  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:15.079606  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:15.579037  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:16.079867  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:16.579473  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:17.079763  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:17.579014  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:18.079633  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:18.579811  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:19.079824  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:19.579447  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:20.079891  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:20.579978  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:21.079410  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:21.579038  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:22.079784  257482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 21:10:22.429240  257482 kubeadm.go:1045] duration metric: took 11.060429325s to wait for elevateKubeSystemPrivileges.
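The burst of `kubectl get sa default` runs above is a fixed-interval retry loop: minikube re-runs the command roughly every 500ms until the `default` service account exists, then reports the total wait as a duration metric. A generic sketch of that pattern (the `wait_for` helper and the marker file are illustrative, not minikube code):

```shell
# Poll a command every 500ms until it succeeds or the deadline passes.
wait_for() {
  # wait_for <timeout_seconds> <command...>
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@" 2>/dev/null; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 0.5
  done
}

# Demo condition: a marker file that appears after ~1s, standing in for
# `kubectl get sa default` starting to succeed.
marker="$(mktemp -u)"
( sleep 1; touch "$marker" ) &
wait_for 10 test -f "$marker" && echo "ready"
```

The same shape covers the later `pod_ready.go` waits, just with a longer interval and a 5m deadline.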
	I0728 21:10:22.429279  257482 kubeadm.go:397] StartCluster complete in 26.728526921s
	I0728 21:10:22.429303  257482 settings.go:142] acquiring lock: {Name:mkde2c38eaf8dba18ec4a329effa3f2c12221de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 21:10:22.429438  257482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 21:10:22.431534  257482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mka3434310bc9890bf6f7ac8ad0a69157716fb18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 21:10:23.028037  257482 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220728205822-9812" rescaled to 1
	I0728 21:10:23.028098  257482 start.go:211] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0728 21:10:23.031719  257482 out.go:177] * Verifying Kubernetes components...
	I0728 21:10:23.028162  257482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 21:10:23.028184  257482 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0728 21:10:23.028405  257482 config.go:178] Loaded profile config "calico-20220728205822-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 21:10:23.033281  257482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 21:10:23.033327  257482 addons.go:65] Setting storage-provisioner=true in profile "calico-20220728205822-9812"
	I0728 21:10:23.033343  257482 addons.go:65] Setting default-storageclass=true in profile "calico-20220728205822-9812"
	I0728 21:10:23.033352  257482 addons.go:153] Setting addon storage-provisioner=true in "calico-20220728205822-9812"
	W0728 21:10:23.033362  257482 addons.go:162] addon storage-provisioner should already be in state true
	I0728 21:10:23.033373  257482 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220728205822-9812"
	I0728 21:10:23.033410  257482 host.go:66] Checking if "calico-20220728205822-9812" exists ...
	I0728 21:10:23.033736  257482 cli_runner.go:164] Run: docker container inspect calico-20220728205822-9812 --format={{.State.Status}}
	I0728 21:10:23.033929  257482 cli_runner.go:164] Run: docker container inspect calico-20220728205822-9812 --format={{.State.Status}}
	I0728 21:10:23.084545  257482 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 21:10:23.085016  257482 addons.go:153] Setting addon default-storageclass=true in "calico-20220728205822-9812"
	W0728 21:10:23.086631  257482 addons.go:162] addon default-storageclass should already be in state true
	I0728 21:10:23.086572  257482 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 21:10:23.086716  257482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 21:10:23.086686  257482 host.go:66] Checking if "calico-20220728205822-9812" exists ...
	I0728 21:10:23.086788  257482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220728205822-9812
	I0728 21:10:23.087471  257482 cli_runner.go:164] Run: docker container inspect calico-20220728205822-9812 --format={{.State.Status}}
	I0728 21:10:23.133607  257482 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 21:10:23.133636  257482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 21:10:23.133695  257482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220728205822-9812
	I0728 21:10:23.140795  257482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/calico-20220728205822-9812/id_rsa Username:docker}
	I0728 21:10:23.144210  257482 node_ready.go:35] waiting up to 5m0s for node "calico-20220728205822-9812" to be "Ready" ...
	I0728 21:10:23.145272  257482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0728 21:10:23.147993  257482 node_ready.go:49] node "calico-20220728205822-9812" has status "Ready":"True"
	I0728 21:10:23.148016  257482 node_ready.go:38] duration metric: took 3.771706ms waiting for node "calico-20220728205822-9812" to be "Ready" ...
	I0728 21:10:23.148027  257482 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 21:10:23.157623  257482 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace to be "Ready" ...
	I0728 21:10:23.180468  257482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/calico-20220728205822-9812/id_rsa Username:docker}
	I0728 21:10:23.417174  257482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 21:10:23.429645  257482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 21:10:24.613328  257482 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.468011857s)
	I0728 21:10:24.613379  257482 start.go:809] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0728 21:10:24.646519  257482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.229290422s)
	I0728 21:10:24.646526  257482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.216832895s)
	I0728 21:10:24.648675  257482 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0728 21:10:24.650496  257482 addons.go:414] enableAddons completed in 1.622310154s
	I0728 21:10:25.169951  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:27.170369  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:29.671179  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:32.170580  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:34.670556  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:37.170784  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:39.669633  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:41.670342  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:43.670962  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:46.170508  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:48.671256  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:51.169942  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:53.170644  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:55.669352  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:57.670231  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:10:59.670696  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:02.170112  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:04.205062  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:06.669523  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:09.169560  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:11.170061  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:13.170252  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:15.670103  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:17.670221  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:19.670781  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:22.169608  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:24.169690  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:26.670168  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:29.169886  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:31.669822  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:33.670982  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:36.170501  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:38.170552  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:40.171364  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:42.670229  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:45.169533  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:47.170038  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:49.170227  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:51.668913  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:53.669065  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:55.669738  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:11:57.670286  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:00.169576  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:02.169715  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:04.170107  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:06.669633  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:09.169343  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:11.169410  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:13.169714  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:15.170157  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:17.669255  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:19.670073  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:21.670303  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:24.170001  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:26.170196  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:28.670152  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:31.169646  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:33.672404  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:36.170579  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:38.669670  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:40.669784  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:42.669859  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:45.169073  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:47.169848  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:49.170001  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:51.669578  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:54.170239  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:56.669545  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:12:58.669626  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:01.169926  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:03.669553  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:05.670158  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:07.670551  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:10.169717  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:12.669650  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:15.170096  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:17.171121  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:19.670171  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:22.169010  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:24.170138  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:26.170282  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:28.171748  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:30.669581  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:32.670960  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:35.169677  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:37.169917  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:39.669422  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:42.170134  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:44.170329  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:46.668924  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:48.669908  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:50.670255  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:53.169093  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:55.170693  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:13:57.669829  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:00.169880  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:02.669724  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:04.669975  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:07.170092  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:09.670190  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:12.169565  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:14.169682  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:16.669996  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:19.170317  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:21.669313  257482 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:23.174058  257482 pod_ready.go:81] duration metric: took 4m0.016399108s waiting for pod "calico-kube-controllers-c44b4545-dbthk" in "kube-system" namespace to be "Ready" ...
	E0728 21:14:23.174082  257482 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0728 21:14:23.174090  257482 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-5kr6p" in "kube-system" namespace to be "Ready" ...
	I0728 21:14:25.186021  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:27.186157  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:29.686097  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:32.186236  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:34.686283  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:37.186184  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:39.186508  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:41.685687  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:44.186420  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:46.186788  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:48.685355  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:50.686253  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:53.185186  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:55.186802  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:14:57.685865  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:00.187539  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:02.686265  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:04.686319  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:07.185937  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:09.685906  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:12.186126  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:14.186191  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:16.685893  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:19.185345  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:21.185417  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:23.186745  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:25.686190  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:28.185777  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:30.686074  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:33.186170  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:35.686205  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:38.185783  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:40.186088  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:42.685437  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:44.686122  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:47.186194  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:49.685899  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:51.685930  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:53.686323  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:56.185636  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:15:58.186257  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:00.186593  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:02.686596  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:05.186565  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:07.685786  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:09.685958  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:11.686081  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:14.186134  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:16.186238  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:18.685626  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:21.185581  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:23.685261  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:25.686138  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:28.185329  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:30.186347  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:32.684993  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:34.685492  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:37.185686  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:39.187763  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:41.686750  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:43.714670  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:46.185518  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:48.686338  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:50.688331  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:53.186580  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:55.686257  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:16:57.686412  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:00.186429  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:02.684879  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:04.685785  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:07.184615  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:09.185787  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:11.685641  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:14.186858  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:16.686289  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:19.185829  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:21.186102  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:23.685282  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:26.185523  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:28.685320  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:30.686004  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:33.185390  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:35.185836  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:37.685193  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:39.685349  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:42.185549  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:44.185792  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:46.685481  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:49.186248  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:51.186338  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:53.684921  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:55.685750  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:17:58.185758  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:18:00.685719  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:18:03.185060  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:18:05.185984  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:18:07.685825  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:18:10.185075  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:18:12.684958  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:18:14.685402  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:18:16.685491  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:18:19.185333  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:18:21.185741  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:18:23.189788  257482 pod_ready.go:102] pod "calico-node-5kr6p" in "kube-system" namespace has status "Ready":"False"
	I0728 21:18:23.189812  257482 pod_ready.go:81] duration metric: took 4m0.015716181s waiting for pod "calico-node-5kr6p" in "kube-system" namespace to be "Ready" ...
	E0728 21:18:23.189820  257482 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0728 21:18:23.189832  257482 pod_ready.go:38] duration metric: took 8m0.04179422s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 21:18:23.191599  257482 out.go:177] 
	W0728 21:18:23.192854  257482 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0728 21:18:23.192868  257482 out.go:239] * 
	W0728 21:18:23.193624  257482 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 21:18:23.195119  257482 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (532.33s)

TestNetworkPlugins/group/kindnet/DNS (368.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:10:33.325076    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:10:33.330390    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:10:33.340679    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:10:33.360977    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:10:33.401282    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:10:33.481623    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:10:33.642048    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:10:33.962622    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:10:34.603521    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:10:35.884083    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16940321s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.145130027s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:11:14.286640    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137857402s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:11:21.459416    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 21:11:25.089948    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
E0728 21:11:25.095237    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
E0728 21:11:25.105537    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
E0728 21:11:25.125804    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
E0728 21:11:25.166174    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
E0728 21:11:25.246508    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
E0728 21:11:25.406965    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
E0728 21:11:25.727516    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
E0728 21:11:26.368687    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
E0728 21:11:27.649000    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
E0728 21:11:30.209978    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140963244s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:11:35.330274    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.163814968s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:11:55.247281    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:12:00.472064    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.124028936s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129778495s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:12:47.012479    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126753662s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131148109s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:14:01.447728    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130390179s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0728 21:14:16.007391    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12286819s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0728 21:15:18.251226    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:16:21.460077    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 21:16:25.089382    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kindnet-20220728205821-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127493887s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kindnet/DNS (368.22s)
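Every attempt above times out before reaching the cluster DNS service, which usually points at CoreDNS rather than at the test itself. A minimal triage sketch (hypothetical commands, not part of the test suite; the context name is taken from the log) would check whether the CoreDNS pods and the kube-dns service are healthy:

```shell
#!/bin/sh
# Hypothetical DNS triage for the kindnet failure above.
# The context name comes from the log; if no such cluster is
# reachable, the script skips the live checks and exits cleanly.
CTX=kindnet-20220728205821-9812

if kubectl config get-contexts "$CTX" >/dev/null 2>&1; then
  # Are the CoreDNS pods running and ready?
  kubectl --context "$CTX" -n kube-system get pods -l k8s-app=kube-dns
  # Do their logs show errors (e.g. upstream timeouts)?
  kubectl --context "$CTX" -n kube-system logs -l k8s-app=kube-dns --tail=20
  # Is the ClusterIP service present? nslookup targets this VIP.
  kubectl --context "$CTX" -n kube-system get svc kube-dns
else
  echo "context $CTX not found; skipping live checks"
fi
```

If the pods are ready but lookups still time out, the next suspect is the CNI plugin (kindnet here) failing to route pod traffic to the service VIP.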

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (307.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146777182s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:12:06.051705    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131694368s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136114269s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12421526s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.122949751s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0728 21:13:13.052811    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:13:13.058124    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:13:13.068402    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:13:13.088704    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:13:13.129665    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:13:13.209935    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:13:13.370454    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:13:13.691285    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:13:14.331438    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:13:15.611696    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:13:17.168548    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:13:18.172876    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:13:23.293802    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147750637s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0728 21:13:33.534400    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15188917s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0728 21:13:54.014979    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:13:56.328076    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:13:56.333346    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:13:56.343634    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:13:56.363926    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:13:56.404214    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:13:56.484519    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:13:56.645016    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:13:56.965762    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:13:57.425420    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 21:13:57.606685    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:13:58.887005    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:14:06.568983    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:14:08.932992    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128149315s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0728 21:14:16.809783    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:14:24.504489    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 21:14:34.975567    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:14:37.290537    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136545635s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:15:33.325642    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:15:36.931016    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:15:36.936271    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:15:36.946538    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:15:36.966811    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:15:37.007151    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:15:37.087482    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:15:37.247887    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:15:37.568516    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:15:38.209112    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:15:39.489978    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.119802734s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
E0728 21:15:42.051005    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:15:47.171603    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:15:56.895957    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:15:57.412753    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:16:01.009486    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:16:17.893522    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13472603s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (307.76s)

TestNetworkPlugins/group/bridge/DNS (299.54s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:21:38.072155    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
E0728 21:21:38.077422    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
E0728 21:21:38.087652    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
E0728 21:21:38.107945    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
E0728 21:21:38.148189    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
E0728 21:21:38.228448    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
E0728 21:21:38.388796    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
E0728 21:21:38.709358    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
E0728 21:21:39.350160    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
E0728 21:21:40.630379    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
E0728 21:21:43.191258    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
E0728 21:21:48.311973    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129863146s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:21:58.552666    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125372832s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:22:19.033474    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.117890227s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.1136907s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:22:54.756060    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728205821-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.120120321s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:22:59.994320    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
E0728 21:23:13.052941    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129517187s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.111838274s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:23:56.327841    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:23:57.425407    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 21:23:59.054976    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132371669s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0728 21:24:16.007471    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
E0728 21:24:21.915534    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728205820-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126045986s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0728 21:25:10.911425    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728205821-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133039768s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0728 21:25:33.325545    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
E0728 21:25:36.930856    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory
E0728 21:25:38.596710    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728205821-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
E0728 21:26:21.460440    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 21:26:25.090200    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127013177s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (299.54s)
Test pass (245/273)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 13.52
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.24.3/json-events 4.6
11 TestDownloadOnly/v1.24.3/preload-exists 0
15 TestDownloadOnly/v1.24.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.37
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
18 TestDownloadOnlyKic 3.86
19 TestBinaryMirror 0.96
20 TestOffline 84.95
22 TestAddons/Setup 119.06
24 TestAddons/parallel/Registry 21.01
25 TestAddons/parallel/Ingress 24.9
26 TestAddons/parallel/MetricsServer 5.55
27 TestAddons/parallel/HelmTiller 14.85
29 TestAddons/parallel/CSI 42.91
30 TestAddons/parallel/Headlamp 9.01
32 TestAddons/serial/GCPAuth 41.71
33 TestAddons/StoppedEnableDisable 20.49
34 TestCertOptions 43.33
35 TestCertExpiration 225.18
37 TestForceSystemdFlag 40.63
38 TestForceSystemdEnv 44.1
39 TestKVMDriverInstallOrUpdate 5.7
43 TestErrorSpam/setup 24.97
44 TestErrorSpam/start 1.07
45 TestErrorSpam/status 1.24
46 TestErrorSpam/pause 1.75
47 TestErrorSpam/unpause 1.75
48 TestErrorSpam/stop 20.46
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 46.6
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 15.68
55 TestFunctional/serial/KubeContext 0.04
56 TestFunctional/serial/KubectlGetPods 0.06
59 TestFunctional/serial/CacheCmd/cache/add_remote 4.35
60 TestFunctional/serial/CacheCmd/cache/add_local 2.13
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
62 TestFunctional/serial/CacheCmd/cache/list 0.06
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
64 TestFunctional/serial/CacheCmd/cache/cache_reload 2.17
65 TestFunctional/serial/CacheCmd/cache/delete 0.13
66 TestFunctional/serial/MinikubeKubectlCmd 0.11
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
68 TestFunctional/serial/ExtraConfig 38.49
69 TestFunctional/serial/ComponentHealth 0.06
70 TestFunctional/serial/LogsCmd 1.1
71 TestFunctional/serial/LogsFileCmd 1.12
73 TestFunctional/parallel/ConfigCmd 0.5
74 TestFunctional/parallel/DashboardCmd 8.76
75 TestFunctional/parallel/DryRun 0.68
76 TestFunctional/parallel/InternationalLanguage 0.28
77 TestFunctional/parallel/StatusCmd 1.41
80 TestFunctional/parallel/ServiceCmd 12.22
81 TestFunctional/parallel/ServiceCmdConnect 10.85
82 TestFunctional/parallel/AddonsCmd 0.22
83 TestFunctional/parallel/PersistentVolumeClaim 35.96
85 TestFunctional/parallel/SSHCmd 0.81
86 TestFunctional/parallel/CpCmd 1.91
87 TestFunctional/parallel/MySQL 24.73
88 TestFunctional/parallel/FileSync 0.42
89 TestFunctional/parallel/CertSync 2.55
93 TestFunctional/parallel/NodeLabels 0.05
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.83
97 TestFunctional/parallel/ImageCommands/ImageListShort 0.37
98 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
99 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
100 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.33
101 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
102 TestFunctional/parallel/ImageCommands/ImageListJson 0.41
103 TestFunctional/parallel/ImageCommands/ImageListYaml 0.4
104 TestFunctional/parallel/ImageCommands/ImageBuild 3.28
105 TestFunctional/parallel/ImageCommands/Setup 1.48
106 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.93
108 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
110 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.22
111 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.48
112 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.62
113 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.79
114 TestFunctional/parallel/ImageCommands/ImageRemove 0.77
115 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.61
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/MountCmd/any-port 13.15
123 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.03
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
125 TestFunctional/parallel/ProfileCmd/profile_list 0.55
126 TestFunctional/parallel/MountCmd/specific-port 2.4
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
128 TestFunctional/parallel/Version/short 0.09
129 TestFunctional/parallel/Version/components 0.66
130 TestFunctional/delete_addon-resizer_images 0.11
131 TestFunctional/delete_my-image_image 0.03
132 TestFunctional/delete_minikube_cached_images 0.03
135 TestIngressAddonLegacy/StartLegacyK8sCluster 83.79
137 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.69
138 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.4
139 TestIngressAddonLegacy/serial/ValidateIngressAddons 37.57
142 TestJSONOutput/start/Command 46.23
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 0.72
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 0.65
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 20.22
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 0.33
167 TestKicCustomNetwork/create_custom_network 36.21
168 TestKicCustomNetwork/use_default_bridge_network 31.45
169 TestKicExistingNetwork 31.12
170 TestKicCustomSubnet 31.78
171 TestMainNoArgs 0.07
172 TestMinikubeProfile 79.96
175 TestMountStart/serial/StartWithMountFirst 5.82
176 TestMountStart/serial/VerifyMountFirst 0.4
177 TestMountStart/serial/StartWithMountSecond 5.74
178 TestMountStart/serial/VerifyMountSecond 0.4
179 TestMountStart/serial/DeleteFirst 1.96
180 TestMountStart/serial/VerifyMountPostDelete 0.4
181 TestMountStart/serial/Stop 1.31
182 TestMountStart/serial/RestartStopped 7.15
183 TestMountStart/serial/VerifyMountPostStop 0.4
186 TestMultiNode/serial/FreshStart2Nodes 103.84
187 TestMultiNode/serial/DeployApp2Nodes 4.68
188 TestMultiNode/serial/PingHostFrom2Pods 0.99
189 TestMultiNode/serial/AddNode 33.71
190 TestMultiNode/serial/ProfileList 0.44
191 TestMultiNode/serial/CopyFile 14.44
192 TestMultiNode/serial/StopNode 2.73
193 TestMultiNode/serial/StartAfterStop 31.53
194 TestMultiNode/serial/RestartKeepsNodes 156.38
195 TestMultiNode/serial/DeleteNode 5.52
196 TestMultiNode/serial/StopMultiNode 40.8
197 TestMultiNode/serial/RestartMultiNode 84.13
198 TestMultiNode/serial/ValidateNameConflict 40.37
203 TestPreload 118.04
205 TestScheduledStopUnix 114.85
208 TestInsufficientStorage 16.93
209 TestRunningBinaryUpgrade 87.89
212 TestMissingContainerUpgrade 166.23
214 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
215 TestStoppedBinaryUpgrade/Setup 0.5
216 TestNoKubernetes/serial/StartWithK8s 61.16
217 TestStoppedBinaryUpgrade/Upgrade 141.68
218 TestNoKubernetes/serial/StartWithStopK8s 18.5
219 TestNoKubernetes/serial/Start 5.3
220 TestNoKubernetes/serial/VerifyK8sNotRunning 0.47
221 TestNoKubernetes/serial/ProfileList 8.43
222 TestNoKubernetes/serial/Stop 4.43
223 TestNoKubernetes/serial/StartNoArgs 6.4
224 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
232 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
234 TestPause/serial/Start 65.32
242 TestNetworkPlugins/group/false 0.65
246 TestPause/serial/SecondStartNoReconfiguration 16.43
247 TestPause/serial/Pause 0.86
248 TestPause/serial/VerifyStatus 0.47
249 TestPause/serial/Unpause 0.84
250 TestPause/serial/PauseAgain 0.96
251 TestPause/serial/DeletePaused 2.96
252 TestPause/serial/VerifyDeletedResources 0.93
254 TestStartStop/group/old-k8s-version/serial/FirstStart 125.7
256 TestStartStop/group/no-preload/serial/FirstStart 52.38
257 TestStartStop/group/no-preload/serial/DeployApp 8.33
258 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.75
259 TestStartStop/group/no-preload/serial/Stop 20.37
260 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
261 TestStartStop/group/no-preload/serial/SecondStart 313.76
262 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
263 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.73
264 TestStartStop/group/old-k8s-version/serial/Stop 20.39
265 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
266 TestStartStop/group/old-k8s-version/serial/SecondStart 436.84
268 TestStartStop/group/default-k8s-different-port/serial/FirstStart 59.61
269 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.49
270 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.73
271 TestStartStop/group/default-k8s-different-port/serial/Stop 20.34
272 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.25
273 TestStartStop/group/default-k8s-different-port/serial/SecondStart 310.57
275 TestStartStop/group/newest-cni/serial/FirstStart 51.22
276 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 19.02
277 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
278 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.48
279 TestStartStop/group/no-preload/serial/Pause 4
281 TestStartStop/group/embed-certs/serial/FirstStart 58.85
282 TestStartStop/group/newest-cni/serial/DeployApp 0
283 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.72
284 TestStartStop/group/newest-cni/serial/Stop 20.42
285 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
286 TestStartStop/group/newest-cni/serial/SecondStart 32.34
287 TestStartStop/group/embed-certs/serial/DeployApp 9.33
288 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.15
289 TestStartStop/group/embed-certs/serial/Stop 20.58
290 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
291 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
292 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.61
293 TestStartStop/group/newest-cni/serial/Pause 3.46
294 TestNetworkPlugins/group/auto/Start 48.04
295 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
296 TestStartStop/group/embed-certs/serial/SecondStart 557.77
297 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 11.02
298 TestNetworkPlugins/group/auto/KubeletFlags 0.5
299 TestNetworkPlugins/group/auto/NetCatPod 9.3
300 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.09
301 TestNetworkPlugins/group/auto/DNS 0.15
302 TestNetworkPlugins/group/auto/Localhost 0.12
303 TestNetworkPlugins/group/auto/HairPin 0.12
304 TestNetworkPlugins/group/kindnet/Start 62.62
305 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.49
306 TestStartStop/group/default-k8s-different-port/serial/Pause 4
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
309 TestNetworkPlugins/group/cilium/Start 78.43
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.62
311 TestStartStop/group/old-k8s-version/serial/Pause 4.69
313 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
314 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
315 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
317 TestNetworkPlugins/group/cilium/ControllerPod 5.02
318 TestNetworkPlugins/group/cilium/KubeletFlags 0.41
319 TestNetworkPlugins/group/cilium/NetCatPod 10.86
320 TestNetworkPlugins/group/cilium/DNS 0.15
321 TestNetworkPlugins/group/cilium/Localhost 0.12
322 TestNetworkPlugins/group/cilium/HairPin 0.12
323 TestNetworkPlugins/group/enable-default-cni/Start 40.67
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.23
327 TestNetworkPlugins/group/bridge/Start 286.02
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.4
331 TestStartStop/group/embed-certs/serial/Pause 3.12
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
333 TestNetworkPlugins/group/bridge/NetCatPod 9.18

TestDownloadOnly/v1.16.0/json-events (13.52s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220728202652-9812 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220728202652-9812 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.517935024s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (13.52s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220728202652-9812
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220728202652-9812: exit status 85 (92.695997ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	| Command |               Args                |              Profile              |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p        | download-only-20220728202652-9812 | jenkins | v1.26.0 | 28 Jul 22 20:26 UTC |          |
	|         | download-only-20220728202652-9812 |                                   |         |         |                     |          |
	|         | --force --alsologtostderr         |                                   |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                                   |         |         |                     |          |
	|         | --container-runtime=containerd    |                                   |         |         |                     |          |
	|         | --driver=docker                   |                                   |         |         |                     |          |
	|         | --container-runtime=containerd    |                                   |         |         |                     |          |
	|---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 20:26:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 20:26:53.061483    9825 out.go:296] Setting OutFile to fd 1 ...
	I0728 20:26:53.062066    9825 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:26:53.062085    9825 out.go:309] Setting ErrFile to fd 2...
	I0728 20:26:53.062093    9825 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:26:53.062319    9825 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	W0728 20:26:53.062541    9825 root.go:310] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/config/config.json: no such file or directory
	I0728 20:26:53.063784    9825 out.go:303] Setting JSON to true
	I0728 20:26:53.064663    9825 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":563,"bootTime":1659039450,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0728 20:26:53.064740    9825 start.go:125] virtualization: kvm guest
	I0728 20:26:53.067736    9825 out.go:97] [download-only-20220728202652-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0728 20:26:53.069716    9825 out.go:169] MINIKUBE_LOCATION=14555
	I0728 20:26:53.067955    9825 notify.go:193] Checking for updates...
	W0728 20:26:53.067964    9825 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball: no such file or directory
	I0728 20:26:53.073128    9825 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 20:26:53.075254    9825 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 20:26:53.077609    9825 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 20:26:53.079503    9825 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0728 20:26:53.082413    9825 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0728 20:26:53.082614    9825 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 20:26:53.122043    9825 docker.go:137] docker version: linux-20.10.17
	I0728 20:26:53.122153    9825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 20:26:53.965717    9825 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2022-07-28 20:26:53.151258236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 20:26:53.965827    9825 docker.go:254] overlay module found
	I0728 20:26:53.968283    9825 out.go:97] Using the docker driver based on user configuration
	I0728 20:26:53.968314    9825 start.go:284] selected driver: docker
	I0728 20:26:53.968336    9825 start.go:808] validating driver "docker" against <nil>
	I0728 20:26:53.968429    9825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 20:26:54.084976    9825 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-07-28 20:26:53.998818488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 20:26:54.085161    9825 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0728 20:26:54.085641    9825 start_flags.go:377] Using suggested 8000MB memory alloc based on sys=32103MB, container=32103MB
	I0728 20:26:54.085746    9825 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 20:26:54.088121    9825 out.go:169] Using Docker driver with root privileges
	I0728 20:26:54.089652    9825 cni.go:95] Creating CNI manager for ""
	I0728 20:26:54.089691    9825 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0728 20:26:54.089708    9825 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0728 20:26:54.089714    9825 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0728 20:26:54.089719    9825 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
	I0728 20:26:54.089728    9825 start_flags.go:310] config:
	{Name:download-only-20220728202652-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220728202652-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 20:26:54.091682    9825 out.go:97] Starting control plane node download-only-20220728202652-9812 in cluster download-only-20220728202652-9812
	I0728 20:26:54.091720    9825 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0728 20:26:54.093334    9825 out.go:97] Pulling base image ...
	I0728 20:26:54.093374    9825 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0728 20:26:54.093439    9825 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 20:26:54.126572    9825 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 to local cache
	I0728 20:26:54.126948    9825 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local cache directory
	I0728 20:26:54.127067    9825 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 to local cache
	I0728 20:26:54.198768    9825 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0728 20:26:54.198805    9825 cache.go:57] Caching tarball of preloaded images
	I0728 20:26:54.199038    9825 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0728 20:26:54.201898    9825 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0728 20:26:54.201931    9825 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0728 20:26:54.313644    9825 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0728 20:26:56.615813    9825 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0728 20:26:56.615903    9825 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0728 20:26:57.521978    9825 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0728 20:26:57.522416    9825 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/download-only-20220728202652-9812/config.json ...
	I0728 20:26:57.522468    9825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/download-only-20220728202652-9812/config.json: {Name:mkffdd091096bab87ed96e4e56a8b55ca8f8b659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 20:26:57.522667    9825 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0728 20:26:57.522959    9825 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220728202652-9812"
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
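The preload download logged above fetches the tarball with a `?checksum=md5:` query and then verifies the digest before caching it. As a standalone sketch of that verify-before-save pattern (illustrative only, not minikube's actual code; the payload bytes are a made-up stand-in for the tarball):

```python
import hashlib

def verify_md5(data: bytes, expected_hex: str) -> bool:
    """Return True if the md5 digest of data matches the expected hex string."""
    return hashlib.md5(data).hexdigest() == expected_hex

# Hypothetical payload standing in for the downloaded preload tarball bytes.
payload = b"preloaded-images-k8s-v18-v1.16.0"
digest = hashlib.md5(payload).hexdigest()

assert verify_md5(payload, digest)             # matching checksum is accepted
assert not verify_md5(payload + b"x", digest)  # any corruption is rejected
```

Only when the digest matches is the tarball saved into the cache; otherwise the download is treated as corrupt and discarded.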

TestDownloadOnly/v1.24.3/json-events (4.6s)

=== RUN   TestDownloadOnly/v1.24.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220728202652-9812 --force --alsologtostderr --kubernetes-version=v1.24.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220728202652-9812 --force --alsologtostderr --kubernetes-version=v1.24.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.595566255s)
--- PASS: TestDownloadOnly/v1.24.3/json-events (4.60s)
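The `-o=json` flag used by these json-events tests makes `minikube start` emit one CloudEvents-style JSON object per line, which the test harness consumes as a stream. A minimal consumer might look like this (the sample line is illustrative; the `io.k8s.sigs.minikube.step` event type and `data` fields follow minikube's JSON output convention, but treat the exact shape as an assumption):

```python
import json

# Illustrative line in the shape minikube's -o=json mode emits
# (one JSON object per line; the values here are made up).
line = ('{"specversion":"1.0","type":"io.k8s.sigs.minikube.step",'
        '"data":{"currentstep":"1","totalsteps":"4","name":"Downloading Kubernetes"}}')

event = json.loads(line)
if event.get("type") == "io.k8s.sigs.minikube.step":
    step = event["data"]
    print(f'step {step["currentstep"]}/{step["totalsteps"]}: {step["name"]}')
```

Parsing line-by-line lets a caller report progress or fail fast on an error event instead of scraping human-oriented output.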

TestDownloadOnly/v1.24.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.24.3/preload-exists
--- PASS: TestDownloadOnly/v1.24.3/preload-exists (0.00s)

TestDownloadOnly/v1.24.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.24.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220728202652-9812
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220728202652-9812: exit status 85 (85.275965ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	| Command |               Args                |              Profile              |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p        | download-only-20220728202652-9812 | jenkins | v1.26.0 | 28 Jul 22 20:26 UTC |          |
	|         | download-only-20220728202652-9812 |                                   |         |         |                     |          |
	|         | --force --alsologtostderr         |                                   |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                                   |         |         |                     |          |
	|         | --container-runtime=containerd    |                                   |         |         |                     |          |
	|         | --driver=docker                   |                                   |         |         |                     |          |
	|         | --container-runtime=containerd    |                                   |         |         |                     |          |
	| start   | -o=json --download-only -p        | download-only-20220728202652-9812 | jenkins | v1.26.0 | 28 Jul 22 20:27 UTC |          |
	|         | download-only-20220728202652-9812 |                                   |         |         |                     |          |
	|         | --force --alsologtostderr         |                                   |         |         |                     |          |
	|         | --kubernetes-version=v1.24.3      |                                   |         |         |                     |          |
	|         | --container-runtime=containerd    |                                   |         |         |                     |          |
	|         | --driver=docker                   |                                   |         |         |                     |          |
	|         | --container-runtime=containerd    |                                   |         |         |                     |          |
	|---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 20:27:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 20:27:06.680692    9989 out.go:296] Setting OutFile to fd 1 ...
	I0728 20:27:06.680840    9989 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:27:06.680849    9989 out.go:309] Setting ErrFile to fd 2...
	I0728 20:27:06.680854    9989 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:27:06.680961    9989 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	W0728 20:27:06.681082    9989 root.go:310] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/config/config.json: no such file or directory
	I0728 20:27:06.681479    9989 out.go:303] Setting JSON to true
	I0728 20:27:06.682306    9989 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":577,"bootTime":1659039450,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0728 20:27:06.682449    9989 start.go:125] virtualization: kvm guest
	I0728 20:27:06.685189    9989 out.go:97] [download-only-20220728202652-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0728 20:27:06.685342    9989 notify.go:193] Checking for updates...
	I0728 20:27:06.687351    9989 out.go:169] MINIKUBE_LOCATION=14555
	I0728 20:27:06.689038    9989 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 20:27:06.690904    9989 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 20:27:06.692496    9989 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 20:27:06.694367    9989 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220728202652-9812"
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.3/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.37s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220728202652-9812
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnlyKic (3.86s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220728202712-9812 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220728202712-9812 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (2.713667594s)
helpers_test.go:175: Cleaning up "download-docker-20220728202712-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220728202712-9812
--- PASS: TestDownloadOnlyKic (3.86s)

TestBinaryMirror (0.96s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220728202715-9812 --alsologtostderr --binary-mirror http://127.0.0.1:41297 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-20220728202715-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220728202715-9812
--- PASS: TestBinaryMirror (0.96s)
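The `--binary-mirror` flag above points minikube's kubectl/kubelet/kubeadm downloads at a caller-supplied HTTP endpoint (here `http://127.0.0.1:41297`). A stand-in mirror is just a static file server over a prepared directory tree; a minimal sketch (serving the release layout from a local directory is an assumption for illustration):

```python
# Minimal static file server that could act as a --binary-mirror target.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_mirror(directory: str, port: int = 0) -> HTTPServer:
    """Bind a file server for `directory` on 127.0.0.1; port 0 picks a free port."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    return HTTPServer(("127.0.0.1", port), handler)

server = make_mirror(".")  # OS assigns an ephemeral port
print(f"mirror would listen on http://127.0.0.1:{server.server_address[1]}")
server.server_close()      # call server.serve_forever() instead to actually serve
```

The test then starts minikube with `--binary-mirror` pointing at such a server and verifies the binaries are fetched from it rather than from the default upstream.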

TestOffline (84.95s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20220728205505-9812 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20220728205505-9812 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m22.205156372s)
helpers_test.go:175: Cleaning up "offline-containerd-20220728205505-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20220728205505-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220728205505-9812: (2.740814122s)
--- PASS: TestOffline (84.95s)

TestAddons/Setup (119.06s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220728202716-9812 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220728202716-9812 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m59.055349347s)
--- PASS: TestAddons/Setup (119.06s)

                                                
                                    
x
+
TestAddons/parallel/Registry (21.01s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 9.291273ms
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-5q445" [6f2ad9a5-c474-4835-9cb3-30a16c2eaf86] Running
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01053128s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-tvcmw" [45c7d332-bbff-4fb4-8027-2188453578b3] Running
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009864928s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220728202716-9812 delete po -l run=registry-test --now
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220728202716-9812 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:297: (dbg) Done: kubectl --context addons-20220728202716-9812 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.962836357s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220728202716-9812 ip
addons_test.go:340: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220728202716-9812 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (21.01s)

TestAddons/parallel/Ingress (24.9s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220728202716-9812 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:164: (dbg) Done: kubectl --context addons-20220728202716-9812 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (1.566977722s)
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220728202716-9812 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220728202716-9812 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [90d363b2-21d1-4963-864e-961d497da4a9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [90d363b2-21d1-4963-864e-961d497da4a9] Running
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.010912976s
addons_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220728202716-9812 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
2022/07/28 20:29:36 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:238: (dbg) Run:  kubectl --context addons-20220728202716-9812 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220728202716-9812 ip
addons_test.go:249: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220728202716-9812 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:258: (dbg) Done: out/minikube-linux-amd64 -p addons-20220728202716-9812 addons disable ingress-dns --alsologtostderr -v=1: (1.33297489s)
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220728202716-9812 addons disable ingress --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p addons-20220728202716-9812 addons disable ingress --alsologtostderr -v=1: (7.575418864s)
--- PASS: TestAddons/parallel/Ingress (24.90s)

TestAddons/parallel/MetricsServer (5.55s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 8.94561ms
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-8595bd7d4c-hnnfz" [9a02638e-eff0-40ba-8391-998babc64dce] Running
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010681833s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220728202716-9812 top pods -n kube-system
addons_test.go:384: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220728202716-9812 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.55s)

TestAddons/parallel/HelmTiller (14.85s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 2.666786ms
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-c7d76457b-h82rt" [ddc54ff2-ca0c-44dc-b24a-7585140f244f] Running
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010441094s
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220728202716-9812 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:425: (dbg) Done: kubectl --context addons-20220728202716-9812 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.44606336s)
addons_test.go:442: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220728202716-9812 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.85s)

TestAddons/parallel/CSI (42.91s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 12.195096ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220728202716-9812 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220728202716-9812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220728202716-9812 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220728202716-9812 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [cdebd990-cc7a-4c09-86b7-150a381f01f9] Pending
helpers_test.go:342: "task-pv-pod" [cdebd990-cc7a-4c09-86b7-150a381f01f9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [cdebd990-cc7a-4c09-86b7-150a381f01f9] Running
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.006855117s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220728202716-9812 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220728202716-9812 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220728202716-9812 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220728202716-9812 delete pod task-pv-pod
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220728202716-9812 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220728202716-9812 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220728202716-9812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220728202716-9812 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [950e3c6a-1e07-483a-bad4-8ca0ad7ed05d] Pending
helpers_test.go:342: "task-pv-pod-restore" [950e3c6a-1e07-483a-bad4-8ca0ad7ed05d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [950e3c6a-1e07-483a-bad4-8ca0ad7ed05d] Running
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 15.006588242s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220728202716-9812 delete pod task-pv-pod-restore
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220728202716-9812 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220728202716-9812 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220728202716-9812 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-linux-amd64 -p addons-20220728202716-9812 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.973410112s)
addons_test.go:594: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220728202716-9812 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.91s)

TestAddons/parallel/Headlamp (9.01s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-20220728202716-9812 --alsologtostderr -v=1
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-866f5bd7bc-gjk29" [80dab10a-cdeb-4179-9ea7-bd568d548c16] Pending
helpers_test.go:342: "headlamp-866f5bd7bc-gjk29" [80dab10a-cdeb-4179-9ea7-bd568d548c16] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-866f5bd7bc-gjk29" [80dab10a-cdeb-4179-9ea7-bd568d548c16] Running
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 8.06692145s
--- PASS: TestAddons/parallel/Headlamp (9.01s)

TestAddons/serial/GCPAuth (41.71s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220728202716-9812 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220728202716-9812 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [e563d572-511a-47d8-90ba-2c7020bdbea3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [e563d572-511a-47d8-90ba-2c7020bdbea3] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.007133774s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220728202716-9812 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220728202716-9812 describe sa gcp-auth-test
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220728202716-9812 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220728202716-9812 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-linux-amd64 -p addons-20220728202716-9812 addons disable gcp-auth --alsologtostderr -v=1: (6.187568331s)
addons_test.go:703: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220728202716-9812 addons enable gcp-auth
addons_test.go:703: (dbg) Done: out/minikube-linux-amd64 -p addons-20220728202716-9812 addons enable gcp-auth: (2.198609781s)
addons_test.go:709: (dbg) Run:  kubectl --context addons-20220728202716-9812 apply -f testdata/private-image.yaml
addons_test.go:716: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7c74db7cd9-9gnq5" [0d7bf9ab-597f-4876-93e8-10c82398c480] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7c74db7cd9-9gnq5" [0d7bf9ab-597f-4876-93e8-10c82398c480] Running
addons_test.go:716: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 16.006507963s
addons_test.go:722: (dbg) Run:  kubectl --context addons-20220728202716-9812 apply -f testdata/private-image-eu.yaml
addons_test.go:727: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-545d57c67f-rw5ng" [0ce58969-3bd2-45fa-82eb-421f8ce6b306] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-545d57c67f-rw5ng" [0ce58969-3bd2-45fa-82eb-421f8ce6b306] Running
addons_test.go:727: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 8.006385736s
--- PASS: TestAddons/serial/GCPAuth (41.71s)

TestAddons/StoppedEnableDisable (20.49s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220728202716-9812
addons_test.go:134: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220728202716-9812: (20.257555439s)
addons_test.go:138: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220728202716-9812
addons_test.go:142: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220728202716-9812
--- PASS: TestAddons/StoppedEnableDisable (20.49s)

TestCertOptions (43.33s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220728205835-9812 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220728205835-9812 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (40.170677093s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220728205835-9812 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
E0728 20:59:16.007864    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220728205835-9812 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220728205835-9812 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220728205835-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220728205835-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220728205835-9812: (2.248119136s)
--- PASS: TestCertOptions (43.33s)

TestCertExpiration (225.18s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220728205827-9812 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220728205827-9812 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (27.219129559s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220728205827-9812 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220728205827-9812 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (15.244369656s)
helpers_test.go:175: Cleaning up "cert-expiration-20220728205827-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220728205827-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220728205827-9812: (2.710875507s)
--- PASS: TestCertExpiration (225.18s)

TestForceSystemdFlag (40.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220728205900-9812 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220728205900-9812 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.688895149s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220728205900-9812 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220728205900-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220728205900-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220728205900-9812: (2.524131839s)
--- PASS: TestForceSystemdFlag (40.63s)

TestForceSystemdEnv (44.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220728205751-9812 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220728205751-9812 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.682883116s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220728205751-9812 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-20220728205751-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220728205751-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220728205751-9812: (5.921824976s)
--- PASS: TestForceSystemdEnv (44.10s)

TestKVMDriverInstallOrUpdate (5.7s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.70s)

TestErrorSpam/setup (24.97s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220728203107-9812 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220728203107-9812 --driver=docker  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220728203107-9812 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220728203107-9812 --driver=docker  --container-runtime=containerd: (24.971218054s)
--- PASS: TestErrorSpam/setup (24.97s)

TestErrorSpam/start (1.07s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 start --dry-run
--- PASS: TestErrorSpam/start (1.07s)

TestErrorSpam/status (1.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 status
--- PASS: TestErrorSpam/status (1.24s)

TestErrorSpam/pause (1.75s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 pause
--- PASS: TestErrorSpam/pause (1.75s)

TestErrorSpam/unpause (1.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (20.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 stop: (20.158417547s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220728203107-9812 --log_dir /tmp/nospam-20220728203107-9812 stop
--- PASS: TestErrorSpam/stop (20.46s)

TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/test/nested/copy/9812/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (46.60s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220728203204-9812 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2160: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220728203204-9812 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (46.600598084s)
--- PASS: TestFunctional/serial/StartWithProxy (46.60s)

TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.68s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220728203204-9812 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220728203204-9812 --alsologtostderr -v=8: (15.678050645s)
functional_test.go:655: soft start took 15.678680343s for "functional-20220728203204-9812" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.68s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220728203204-9812 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 cache add k8s.gcr.io/pause:3.1: (1.507823798s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 cache add k8s.gcr.io/pause:3.3: (1.610434487s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 cache add k8s.gcr.io/pause:latest: (1.23037338s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.35s)

TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220728203204-9812 /tmp/TestFunctionalserialCacheCmdcacheadd_local2096179062/001
functional_test.go:1081: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 cache add minikube-local-cache-test:functional-20220728203204-9812
functional_test.go:1081: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 cache add minikube-local-cache-test:functional-20220728203204-9812: (1.856009808s)
functional_test.go:1086: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 cache delete minikube-local-cache-test:functional-20220728203204-9812
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220728203204-9812
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (358.266548ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 cache reload: (1.084013144s)
functional_test.go:1155: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 kubectl -- --context functional-20220728203204-9812 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220728203204-9812 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (38.49s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220728203204-9812 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220728203204-9812 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.48719968s)
functional_test.go:753: restart took 38.487327339s for "functional-20220728203204-9812" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.49s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220728203204-9812 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.10s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 logs
functional_test.go:1228: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 logs: (1.104438867s)
--- PASS: TestFunctional/serial/LogsCmd (1.10s)

TestFunctional/serial/LogsFileCmd (1.12s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 logs --file /tmp/TestFunctionalserialLogsFileCmd3362852827/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 logs --file /tmp/TestFunctionalserialLogsFileCmd3362852827/001/logs.txt: (1.12222379s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.12s)

TestFunctional/parallel/ConfigCmd (0.50s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220728203204-9812 config get cpus: exit status 14 (78.26727ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 config set cpus 2

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220728203204-9812 config get cpus: exit status 14 (84.166369ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (8.76s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220728203204-9812 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220728203204-9812 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 47544: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.76s)

TestFunctional/parallel/DryRun (0.68s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220728203204-9812 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220728203204-9812 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (287.494214ms)

-- stdout --
	* [functional-20220728203204-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0728 20:34:34.840776   46237 out.go:296] Setting OutFile to fd 1 ...
	I0728 20:34:34.840933   46237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:34:34.840947   46237 out.go:309] Setting ErrFile to fd 2...
	I0728 20:34:34.840960   46237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:34:34.841108   46237 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 20:34:34.841829   46237 out.go:303] Setting JSON to false
	I0728 20:34:34.843428   46237 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1025,"bootTime":1659039450,"procs":429,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0728 20:34:34.843510   46237 start.go:125] virtualization: kvm guest
	I0728 20:34:34.846257   46237 out.go:177] * [functional-20220728203204-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0728 20:34:34.847864   46237 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 20:34:34.849236   46237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 20:34:34.850581   46237 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 20:34:34.851901   46237 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 20:34:34.853245   46237 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0728 20:34:34.855036   46237 config.go:178] Loaded profile config "functional-20220728203204-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 20:34:34.855622   46237 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 20:34:34.900035   46237 docker.go:137] docker version: linux-20.10.17
	I0728 20:34:34.900136   46237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 20:34:35.035976   46237 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:40 SystemTime:2022-07-28 20:34:34.935038399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 20:34:35.036113   46237 docker.go:254] overlay module found
	I0728 20:34:35.038667   46237 out.go:177] * Using the docker driver based on existing profile
	I0728 20:34:35.040114   46237 start.go:284] selected driver: docker
	I0728 20:34:35.040145   46237 start.go:808] validating driver "docker" against &{Name:functional-20220728203204-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220728203204-9812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 20:34:35.040304   46237 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 20:34:35.042690   46237 out.go:177] 
	W0728 20:34:35.044076   46237 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0728 20:34:35.045493   46237 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220728203204-9812 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.68s)

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220728203204-9812 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220728203204-9812 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (282.806701ms)

-- stdout --
	* [functional-20220728203204-9812] minikube v1.26.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0728 20:34:34.269379   45849 out.go:296] Setting OutFile to fd 1 ...
	I0728 20:34:34.269540   45849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:34:34.269552   45849 out.go:309] Setting ErrFile to fd 2...
	I0728 20:34:34.269559   45849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:34:34.269813   45849 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 20:34:34.270554   45849 out.go:303] Setting JSON to false
	I0728 20:34:34.271915   45849 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1024,"bootTime":1659039450,"procs":428,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0728 20:34:34.271990   45849 start.go:125] virtualization: kvm guest
	I0728 20:34:34.274751   45849 out.go:177] * [functional-20220728203204-9812] minikube v1.26.0 sur Ubuntu 20.04 (kvm/amd64)
	I0728 20:34:34.276604   45849 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 20:34:34.278123   45849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 20:34:34.279559   45849 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 20:34:34.280931   45849 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 20:34:34.282267   45849 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0728 20:34:34.283915   45849 config.go:178] Loaded profile config "functional-20220728203204-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 20:34:34.284406   45849 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 20:34:34.329200   45849 docker.go:137] docker version: linux-20.10.17
	I0728 20:34:34.329297   45849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 20:34:34.456294   45849 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:40 SystemTime:2022-07-28 20:34:34.365574832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 20:34:34.456452   45849 docker.go:254] overlay module found
	I0728 20:34:34.458989   45849 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0728 20:34:34.460380   45849 start.go:284] selected driver: docker
	I0728 20:34:34.460419   45849 start.go:808] validating driver "docker" against &{Name:functional-20220728203204-9812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220728203204-9812 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-sec
urity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 20:34:34.460564   45849 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 20:34:34.462997   45849 out.go:177] 
	W0728 20:34:34.464532   45849 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0728 20:34:34.465970   45849 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

TestFunctional/parallel/StatusCmd (1.41s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.41s)

TestFunctional/parallel/ServiceCmd (12.22s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220728203204-9812 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220728203204-9812 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-ctd4v" [4f51c321-2883-480a-9ded-c1e86998c57a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54c4b5c49f-ctd4v" [4f51c321-2883-480a-9ded-c1e86998c57a] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 9.006050874s
functional_test.go:1448: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1475: found endpoint: https://192.168.49.2:31126
functional_test.go:1490: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1510: found endpoint for hello-node: http://192.168.49.2:31126
--- PASS: TestFunctional/parallel/ServiceCmd (12.22s)

TestFunctional/parallel/ServiceCmdConnect (10.85s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220728203204-9812 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220728203204-9812 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-578cdc45cb-2pgg2" [de0f70c5-0fad-4734-870f-27ffe488dcb5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-578cdc45cb-2pgg2" [de0f70c5-0fad-4734-870f-27ffe488dcb5] Running
E0728 20:34:26.248522    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.007141615s
functional_test.go:1578: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1584: found endpoint for hello-node-connect: http://192.168.49.2:30597
functional_test.go:1604: http://192.168.49.2:30597: success! body:

Hostname: hello-node-connect-578cdc45cb-2pgg2

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30597
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.85s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1631: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (35.96s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [fb39e70e-b612-4d80-a6d3-02792c8e8818] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009856521s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220728203204-9812 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220728203204-9812 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220728203204-9812 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220728203204-9812 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [dd8e7b3b-da30-4556-bd25-2da8d60c6ff6] Pending
helpers_test.go:342: "sp-pod" [dd8e7b3b-da30-4556-bd25-2da8d60c6ff6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [dd8e7b3b-da30-4556-bd25-2da8d60c6ff6] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.00929154s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220728203204-9812 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220728203204-9812 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220728203204-9812 delete -f testdata/storage-provisioner/pod.yaml: (2.011650705s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220728203204-9812 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [9792ba49-e9ac-499c-9d65-6fb6f8055652] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [9792ba49-e9ac-499c-9d65-6fb6f8055652] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [9792ba49-e9ac-499c-9d65-6fb6f8055652] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006970388s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220728203204-9812 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.96s)

TestFunctional/parallel/SSHCmd (0.81s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

TestFunctional/parallel/CpCmd (1.91s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh -n functional-20220728203204-9812 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 cp functional-20220728203204-9812:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1377199792/001/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh -n functional-20220728203204-9812 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.91s)

TestFunctional/parallel/MySQL (24.73s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220728203204-9812 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-67f7d69d8b-w4tcv" [73ff9713-4c69-471b-a7e1-ea81a8ef1c2a] Pending

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-w4tcv" [73ff9713-4c69-471b-a7e1-ea81a8ef1c2a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-w4tcv" [73ff9713-4c69-471b-a7e1-ea81a8ef1c2a] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.013780686s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220728203204-9812 exec mysql-67f7d69d8b-w4tcv -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220728203204-9812 exec mysql-67f7d69d8b-w4tcv -- mysql -ppassword -e "show databases;": exit status 1 (289.888626ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220728203204-9812 exec mysql-67f7d69d8b-w4tcv -- mysql -ppassword -e "show databases;"
E0728 20:34:17.286930    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220728203204-9812 exec mysql-67f7d69d8b-w4tcv -- mysql -ppassword -e "show databases;": exit status 1 (324.512692ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220728203204-9812 exec mysql-67f7d69d8b-w4tcv -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220728203204-9812 exec mysql-67f7d69d8b-w4tcv -- mysql -ppassword -e "show databases;": exit status 1 (295.343987ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220728203204-9812 exec mysql-67f7d69d8b-w4tcv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.73s)

TestFunctional/parallel/FileSync (0.42s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/9812/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo cat /etc/test/nested/copy/9812/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

TestFunctional/parallel/CertSync (2.55s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/9812.pem within VM

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo cat /etc/ssl/certs/9812.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/9812.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo cat /usr/share/ca-certificates/9812.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/98122.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo cat /etc/ssl/certs/98122.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/98122.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo cat /usr/share/ca-certificates/98122.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.55s)
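
The `/etc/ssl/certs/51391683.0` and `/etc/ssl/certs/3ec20f2e.0` names checked above are OpenSSL subject-hash links, whose file names derive from the certificate's subject. A self-contained sketch of computing such a name (assumes the `openssl` CLI is available; the throwaway cert is illustrative, not taken from the test):

```shell
# Create a throwaway self-signed cert, then compute the hash-link name
# (the "<hash>.0" form used by c_rehash-style certificate directories).
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -days 1 2>/dev/null
hash=$(openssl x509 -noout -subject_hash -in "$tmp/cert.pem")
echo "link name: ${hash}.0"
rm -rf "$tmp"
```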

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220728203204-9812 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo systemctl is-active docker"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo systemctl is-active docker": exit status 1 (411.370922ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo systemctl is-active crio": exit status 1 (420.648318ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)
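
The test passes despite the "Non-zero exit" lines: `systemctl is-active` prints the unit state and exits 3 when the unit is inactive, which is exactly what this test expects for docker and crio on a containerd cluster. A generic sketch of capturing an exit status that way:

```shell
# Simulate a command that reports a state and exits 3, like `systemctl is-active`
# does for an inactive unit, then capture its exit status.
if sh -c 'echo inactive; exit 3'; then
  status=0
else
  status=$?
fi
echo "exit status: $status"
```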

TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.7
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220728203204-9812
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220728203204-9812
docker.io/kindest/kindnetd:v20220510-4929dd75
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls --format table
E0728 20:34:36.489098    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7                            | sha256:314749 | 128MB  |
| docker.io/library/nginx                     | latest                         | sha256:670dcc | 56.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/kube-apiserver                   | v1.24.3                        | sha256:d521dd | 33.8MB |
| k8s.gcr.io/kube-proxy                       | v1.24.3                        | sha256:2ae1ba | 39.5MB |
| k8s.gcr.io/kube-scheduler                   | v1.24.3                        | sha256:3a5aa3 | 15.5MB |
| k8s.gcr.io/pause                            | latest                         | sha256:350b16 | 72.3kB |
| docker.io/library/minikube-local-cache-test | functional-20220728203204-9812 | sha256:b2de1b | 1.74kB |
| k8s.gcr.io/pause                            | 3.1                            | sha256:da86e6 | 315kB  |
| k8s.gcr.io/pause                            | 3.7                            | sha256:221177 | 311kB  |
| k8s.gcr.io/kube-controller-manager          | v1.24.3                        | sha256:586c11 | 31MB   |
| k8s.gcr.io/echoserver                       | 1.8                            | sha256:82e4c8 | 46.2MB |
| k8s.gcr.io/etcd                             | 3.5.3-0                        | sha256:aebe75 | 102MB  |
| gcr.io/google-containers/addon-resizer      | functional-20220728203204-9812 | sha256:ffd4cf | 10.8MB |
| docker.io/library/nginx                     | alpine                         | sha256:e46bcc | 10.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | sha256:a4ca41 | 13.6MB |
| k8s.gcr.io/pause                            | 3.3                            | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20220510-4929dd75             | sha256:6fb66c | 45.2MB |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls --format json:
[{"id":"sha256:3147495b3a5ce957dee2319099a8808c1418e0b0a2c82c9b2396c5fb4b688509","repoDigests":["docker.io/library/mysql@sha256:b3a86578a582617214477d91e47e850f9e18df0b5d1644fb2d96d91a340b8972"],"repoTags":["docker.io/library/mysql:5.7"],"size":"128384456"},{"id":"sha256:3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0","repoDigests":["k8s.gcr.io/kube-scheduler@sha256:e199523298224cd9f2a9a43c7c2c37fa57aff87648ed1e1de9984eba6f6005f0"],"repoTags":["k8s.gcr.io/kube-scheduler:v1.24.3"],"size":"15488985"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:670dcc86b69df89a9d5a9e1a7ae5b8f67619c1c74e19de8a35f57d6c06505fd4","repoDigests":["docker.io/library/nginx@sha256:1761fb5661e4d77e107427d8012ad3a5955007d997e0f4a3d41acc9ff20467c7"],"repoTags":["docker.io/library/nginx:latest"],"size":"56729488"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db","repoDigests":["k8s.gcr.io/kube-apiserver@sha256:a04609b85962da7e6531d32b75f652b4fb9f5fe0b0ee0aa160856faad8ec5d96"],"repoTags":["k8s.gcr.io/kube-apiserver:v1.24.3"],"size":"33796659"},{"id":"sha256:586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f","repoDigests":["k8s.gcr.io/kube-controller-manager@sha256:f504eead8b8674ebc9067370ef51abbdc531b4a81813bfe464abccb8c76b6a53"],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.24.3"],"size":"31035788"},{"id":"sha256:b2de1b2a7956d75d1434a905ca9005ade66a3990150539f07b240faebc337386","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220728203204-9812"],"size":"1737"},{"id":"sha256:e46bcc69753105cfd75905056666b92cee0d3e96ebf134b19f1b38de53cda93e","repoDigests":["docker.io/library/nginx@sha256:87fb6f4040ffd52dd616f360b8520ed4482930ea75417182ad3f76c4aaadf24f"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10205078"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":["k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e"],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"13585107"},{"id":"sha256:2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302","repoDigests":["k8s.gcr.io/kube-proxy@sha256:c1b135231b5b1a6799346cd701da4b59e5b7ef8e694ec7b04fb23b8dbe144137"],"repoTags":["k8s.gcr.io/kube-proxy:v1.24.3"],"size":"39515847"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:6fb66cd78abfe9e0735a9a751f2586b7984e0d279e87fa8dd175781de6595627","repoDigests":["docker.io/kindest/kindnetd@sha256:39494477a3fa001aae716b704a8991f4f62d2ccf1aaaa65692da6c805b18856c"],"repoTags":["docker.io/kindest/kindnetd:v20220510-4929dd75"],"size":"45239873"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220728203204-9812"],"size":"10823156"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","repoDigests":["k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5"],"repoTags":["k8s.gcr.io/etcd:3.5.3-0"],"size":"102143581"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165","repoDigests":["k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c"],"repoTags":["k8s.gcr.io/pause:3.7"],"size":"311278"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls --format yaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls --format yaml:
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0
repoDigests:
- k8s.gcr.io/kube-scheduler@sha256:e199523298224cd9f2a9a43c7c2c37fa57aff87648ed1e1de9984eba6f6005f0
repoTags:
- k8s.gcr.io/kube-scheduler:v1.24.3
size: "15488985"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f
repoDigests:
- k8s.gcr.io/kube-controller-manager@sha256:f504eead8b8674ebc9067370ef51abbdc531b4a81813bfe464abccb8c76b6a53
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.24.3
size: "31035788"
- id: sha256:6fb66cd78abfe9e0735a9a751f2586b7984e0d279e87fa8dd175781de6595627
repoDigests:
- docker.io/kindest/kindnetd@sha256:39494477a3fa001aae716b704a8991f4f62d2ccf1aaaa65692da6c805b18856c
repoTags:
- docker.io/kindest/kindnetd:v20220510-4929dd75
size: "45239873"
- id: sha256:670dcc86b69df89a9d5a9e1a7ae5b8f67619c1c74e19de8a35f57d6c06505fd4
repoDigests:
- docker.io/library/nginx@sha256:1761fb5661e4d77e107427d8012ad3a5955007d997e0f4a3d41acc9ff20467c7
repoTags:
- docker.io/library/nginx:latest
size: "56729488"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220728203204-9812
size: "10823156"
- id: sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests:
- k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "13585107"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b
repoDigests:
- k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5
repoTags:
- k8s.gcr.io/etcd:3.5.3-0
size: "102143581"
- id: sha256:d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db
repoDigests:
- k8s.gcr.io/kube-apiserver@sha256:a04609b85962da7e6531d32b75f652b4fb9f5fe0b0ee0aa160856faad8ec5d96
repoTags:
- k8s.gcr.io/kube-apiserver:v1.24.3
size: "33796659"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:3147495b3a5ce957dee2319099a8808c1418e0b0a2c82c9b2396c5fb4b688509
repoDigests:
- docker.io/library/mysql@sha256:b3a86578a582617214477d91e47e850f9e18df0b5d1644fb2d96d91a340b8972
repoTags:
- docker.io/library/mysql:5.7
size: "128384456"
- id: sha256:e46bcc69753105cfd75905056666b92cee0d3e96ebf134b19f1b38de53cda93e
repoDigests:
- docker.io/library/nginx@sha256:87fb6f4040ffd52dd616f360b8520ed4482930ea75417182ad3f76c4aaadf24f
repoTags:
- docker.io/library/nginx:alpine
size: "10205078"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302
repoDigests:
- k8s.gcr.io/kube-proxy@sha256:c1b135231b5b1a6799346cd701da4b59e5b7ef8e694ec7b04fb23b8dbe144137
repoTags:
- k8s.gcr.io/kube-proxy:v1.24.3
size: "39515847"
- id: sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
repoDigests:
- k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c
repoTags:
- k8s.gcr.io/pause:3.7
size: "311278"
- id: sha256:b2de1b2a7956d75d1434a905ca9005ade66a3990150539f07b240faebc337386
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220728203204-9812
size: "1737"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.40s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh pgrep buildkitd: exit status 1 (577.963532ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image build -t localhost/my-image:functional-20220728203204-9812 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 image build -t localhost/my-image:functional-20220728203204-9812 testdata/build: (2.458416958s)
functional_test.go:318: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20220728203204-9812 image build -t localhost/my-image:functional-20220728203204-9812 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.3s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a3cda1c01a32662639c23314ef665690e3c6465a594304d26e8a4ae6dc3036f0 0.0s done
#8 exporting config sha256:87cbcd408e485e08bc08baf3bbc804a4b798b3e2d0ada6edde1deca28af929c2 done
#8 naming to localhost/my-image:functional-20220728203204-9812 done
#8 DONE 0.1s
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls
2022/07/28 20:34:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.28s)
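
From the build stages logged above (a 97-byte Dockerfile, `[1/3] FROM gcr.io/k8s-minikube/busybox...`, `[2/3] RUN true`, `[3/3] ADD content.txt /`), the `testdata/build` context can be inferred to look roughly like the following sketch; the exact files may differ, and the content.txt text here is assumed:

```shell
# Reconstruct the assumed testdata/build context in a scratch directory.
dir=$(mktemp -d)
cat > "$dir/Dockerfile" <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo "test content" > "$dir/content.txt"   # contents assumed, not from the log
cat "$dir/Dockerfile"
rm -rf "$dir"
```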

TestFunctional/parallel/ImageCommands/Setup (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.432422227s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220728203204-9812
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.48s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220728203204-9812

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220728203204-9812: (4.644916831s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.93s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220728203204-9812 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220728203204-9812 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [a32a22fa-4546-42d3-85eb-fc9f733b3539] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [a32a22fa-4546-42d3-85eb-fc9f733b3539] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.038810468s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220728203204-9812

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220728203204-9812: (5.193400743s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.48s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.434217057s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220728203204-9812
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220728203204-9812

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220728203204-9812: (5.810009567s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls
E0728 20:34:16.007034    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
E0728 20:34:16.012691    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
E0728 20:34:16.022943    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
E0728 20:34:16.043213    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
E0728 20:34:16.083457    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
E0728 20:34:16.163752    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.62s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image save gcr.io/google-containers/addon-resizer:functional-20220728203204-9812 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
E0728 20:34:16.324899    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
E0728 20:34:16.645845    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 image save gcr.io/google-containers/addon-resizer:functional-20220728203204-9812 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.792527563s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.79s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image rm gcr.io/google-containers/addon-resizer:functional-20220728203204-9812
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls
E0728 20:34:18.567271    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-linux-amd64 -p functional-20220728203204-9812 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.364436994s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.61s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220728203204-9812 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.103.60.201 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220728203204-9812 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (13.15s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220728203204-9812 /tmp/TestFunctionalparallelMountCmdany-port1321744638/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1659040459734705901" to /tmp/TestFunctionalparallelMountCmdany-port1321744638/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1659040459734705901" to /tmp/TestFunctionalparallelMountCmdany-port1321744638/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1659040459734705901" to /tmp/TestFunctionalparallelMountCmdany-port1321744638/001/test-1659040459734705901
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (476.494061ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "findmnt -T /mount-9p | grep 9p"
E0728 20:34:21.127492    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 28 20:34 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 28 20:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 28 20:34 test-1659040459734705901
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh cat /mount-9p/test-1659040459734705901

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220728203204-9812 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [23966a60-b9c1-46e6-a852-ee99c446b904] Pending
helpers_test.go:342: "busybox-mount" [23966a60-b9c1-46e6-a852-ee99c446b904] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [23966a60-b9c1-46e6-a852-ee99c446b904] Running

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [23966a60-b9c1-46e6-a852-ee99c446b904] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [23966a60-b9c1-46e6-a852-ee99c446b904] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.006719222s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220728203204-9812 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh stat /mount-9p/created-by-pod

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220728203204-9812 /tmp/TestFunctionalparallelMountCmdany-port1321744638/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.15s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220728203204-9812
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220728203204-9812

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220728203204-9812
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.03s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "470.181002ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-linux-amd64 profile list -l

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1324: Took "77.180706ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

TestFunctional/parallel/MountCmd/specific-port (2.4s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220728203204-9812 /tmp/TestFunctionalparallelMountCmdspecific-port4199387850/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (453.664149ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220728203204-9812 /tmp/TestFunctionalparallelMountCmdspecific-port4199387850/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh "sudo umount -f /mount-9p": exit status 1 (435.878318ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220728203204-9812 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220728203204-9812 /tmp/TestFunctionalparallelMountCmdspecific-port4199387850/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: Took "428.344243ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1374: Took "94.65369ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.66s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220728203204-9812 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

TestFunctional/delete_addon-resizer_images (0.11s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220728203204-9812
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220728203204-9812
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220728203204-9812
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (83.79s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220728203447-9812 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0728 20:34:56.969560    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
E0728 20:35:37.930393    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220728203447-9812 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m23.793100999s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (83.79s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.69s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220728203447-9812 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220728203447-9812 addons enable ingress --alsologtostderr -v=5: (9.694118631s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.69s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.4s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220728203447-9812 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.40s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (37.57s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:164: (dbg) Run:  kubectl --context ingress-addon-legacy-20220728203447-9812 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:164: (dbg) Done: kubectl --context ingress-addon-legacy-20220728203447-9812 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.689512945s)
addons_test.go:184: (dbg) Run:  kubectl --context ingress-addon-legacy-20220728203447-9812 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-20220728203447-9812 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [add01688-7506-4c65-908e-2dd9421e5c40] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [add01688-7506-4c65-908e-2dd9421e5c40] Running
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.005986926s
addons_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220728203447-9812 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Run:  kubectl --context ingress-addon-legacy-20220728203447-9812 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220728203447-9812 ip
addons_test.go:249: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220728203447-9812 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:258: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220728203447-9812 addons disable ingress-dns --alsologtostderr -v=1: (4.137052966s)
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220728203447-9812 addons disable ingress --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220728203447-9812 addons disable ingress --alsologtostderr -v=1: (7.31291398s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (37.57s)

TestJSONOutput/start/Command (46.23s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220728203701-9812 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220728203701-9812 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (46.228341537s)
--- PASS: TestJSONOutput/start/Command (46.23s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220728203701-9812 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220728203701-9812 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (20.22s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220728203701-9812 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220728203701-9812 --output=json --user=testUser: (20.220182151s)
--- PASS: TestJSONOutput/stop/Command (20.22s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220728203814-9812 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220728203814-9812 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.876726ms)
-- stdout --
	{"specversion":"1.0","id":"ff35a5e8-f79c-43cd-bfff-fe0b9ced9b49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220728203814-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"20a8adc4-9aaf-4276-9bd1-15ee0c24733d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14555"}}
	{"specversion":"1.0","id":"78612e76-8c40-4acd-8ee7-593166221179","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d4d86666-4a08-4e1a-872d-023273aa6a56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig"}}
	{"specversion":"1.0","id":"2b1eff6e-4626-4e9d-8b60-c9a44428c53a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube"}}
	{"specversion":"1.0","id":"cdc68f51-be95-4496-8227-d707befcbae2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"daef307b-5465-4178-93ab-a43504b51db0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220728203814-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220728203814-9812
--- PASS: TestErrorJSONOutput (0.33s)
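The stdout captured above is the CloudEvents-style JSON that `--output=json` emits, one event per line. A minimal sketch of consuming one such line, using the error event copied verbatim from the stdout above (splitting the `type` suffix to classify the event is this sketch's own convention, not something the test asserts):

```python
import json

# One CloudEvents line as emitted by `minikube start --output=json`,
# copied from the test's stdout above.
line = ('{"specversion":"1.0","id":"daef307b-5465-4178-93ab-a43504b51db0",'
        '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json",'
        '"data":{"advice":"","exitcode":"56","issues":"",'
        '"message":"The driver \'fail\' is not supported on linux/amd64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

event = json.loads(line)
# The last component of the event type distinguishes "step", "info", "error".
kind = event["type"].rsplit(".", 1)[-1]
data = event["data"]

if kind == "error":
    # Exit codes are carried as strings inside the event payload.
    print(f'exit {data["exitcode"]}: {data["message"]}')
```

This prints `exit 56: The driver 'fail' is not supported on linux/amd64`, matching the exit status 56 the test observes.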

TestKicCustomNetwork/create_custom_network (36.21s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220728203815-9812 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220728203815-9812 --network=: (33.905400465s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220728203815-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220728203815-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220728203815-9812: (2.273608477s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.21s)

TestKicCustomNetwork/use_default_bridge_network (31.45s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220728203851-9812 --network=bridge
E0728 20:38:57.424557    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:38:57.429847    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:38:57.440126    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:38:57.460466    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:38:57.500799    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:38:57.581178    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:38:57.741610    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:38:58.062300    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:38:58.703312    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:38:59.984126    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:39:02.544378    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:39:07.665374    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:39:16.007079    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
E0728 20:39:17.906406    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220728203851-9812 --network=bridge: (29.259860647s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220728203851-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220728203851-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220728203851-9812: (2.149517352s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.45s)

TestKicExistingNetwork (31.12s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220728203923-9812 --network=existing-network
E0728 20:39:38.387341    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:39:43.692406    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220728203923-9812 --network=existing-network: (28.656399107s)
helpers_test.go:175: Cleaning up "existing-network-20220728203923-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220728203923-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220728203923-9812: (2.219727321s)
--- PASS: TestKicExistingNetwork (31.12s)

TestKicCustomSubnet (31.78s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220728203954-9812 --subnet=192.168.60.0/24
E0728 20:40:19.348130    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220728203954-9812 --subnet=192.168.60.0/24: (29.302786082s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220728203954-9812 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220728203954-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220728203954-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220728203954-9812: (2.440539525s)
--- PASS: TestKicCustomSubnet (31.78s)
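The test verifies the requested `--subnet` by reading `docker network inspect ... --format "{{(index .IPAM.Config 0).Subnet}}"`. The same check can be sketched in Python with the stdlib `ipaddress` module; the subnet string is the one from the log above, the `reported` value stands in for the inspect output, and the `.2` node address is an assumption, not from the log:

```python
import ipaddress

# Subnet requested via --subnet in the test above, and a stand-in for the
# string the docker network inspect format check would print for it.
requested = "192.168.60.0/24"
reported = "192.168.60.0/24"

net = ipaddress.ip_network(reported)
assert net == ipaddress.ip_network(requested), "created network has wrong subnet"

# Any container address docker assigns must fall inside the requested range;
# .2 as the first node address is an assumption for illustration.
node_ip = ipaddress.ip_address("192.168.60.2")
print(node_ip in net)  # True
```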

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (79.96s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-20220728204025-9812 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-20220728204025-9812 --driver=docker  --container-runtime=containerd: (36.370253517s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-20220728204025-9812 --driver=docker  --container-runtime=containerd
E0728 20:41:21.459490    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:41:21.464768    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:41:21.475094    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:41:21.495444    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:41:21.535913    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:41:21.616286    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:41:21.776761    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:41:22.097121    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:41:22.738070    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:41:24.018674    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:41:26.579340    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:41:31.699942    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-20220728204025-9812 --driver=docker  --container-runtime=containerd: (36.995019339s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-20220728204025-9812
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-20220728204025-9812
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220728204025-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-20220728204025-9812
E0728 20:41:41.268330    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:41:41.940419    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-20220728204025-9812: (2.504911773s)
helpers_test.go:175: Cleaning up "first-20220728204025-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-20220728204025-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-20220728204025-9812: (2.573915767s)
--- PASS: TestMinikubeProfile (79.96s)
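The `profile list -ojson` steps above return machine-readable profile data. A sketch of consuming it, with the caveat that the payload below is hypothetical, modeled on the two profiles the test creates; the `{"valid": [...], "invalid": [...]}` shape and the `Name` field are assumptions about the real output, not taken from this log:

```python
import json

# Hypothetical `minikube profile list -ojson` payload (shape assumed).
sample = """
{
  "invalid": [],
  "valid": [
    {"Name": "first-20220728204025-9812"},
    {"Name": "second-20220728204025-9812"}
  ]
}
"""

profiles = json.loads(sample)
# Collect the names of the profiles minikube considers valid.
names = [p["Name"] for p in profiles["valid"]]
print(names)
```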

TestMountStart/serial/StartWithMountFirst (5.82s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220728204145-9812 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220728204145-9812 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.82444313s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.82s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220728204145-9812 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (5.74s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220728204145-9812 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220728204145-9812 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.737560059s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.74s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220728204145-9812 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (1.96s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220728204145-9812 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220728204145-9812 --alsologtostderr -v=5: (1.96013324s)
--- PASS: TestMountStart/serial/DeleteFirst (1.96s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220728204145-9812 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.31s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220728204145-9812
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220728204145-9812: (1.311068668s)
--- PASS: TestMountStart/serial/Stop (1.31s)

TestMountStart/serial/RestartStopped (7.15s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220728204145-9812
E0728 20:42:02.421581    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220728204145-9812: (6.14804585s)
--- PASS: TestMountStart/serial/RestartStopped (7.15s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220728204145-9812 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (103.84s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220728204211-9812 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0728 20:42:43.382150    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220728204211-9812 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m43.194642684s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.84s)

TestMultiNode/serial/DeployApp2Nodes (4.68s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- rollout status deployment/busybox
E0728 20:43:57.425484    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- rollout status deployment/busybox: (2.779787108s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- exec busybox-d46db594c-26vdn -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- exec busybox-d46db594c-tpwnb -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- exec busybox-d46db594c-26vdn -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- exec busybox-d46db594c-tpwnb -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- exec busybox-d46db594c-26vdn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- exec busybox-d46db594c-tpwnb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.68s)

TestMultiNode/serial/PingHostFrom2Pods (0.99s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- exec busybox-d46db594c-26vdn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- exec busybox-d46db594c-26vdn -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- exec busybox-d46db594c-tpwnb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220728204211-9812 -- exec busybox-d46db594c-tpwnb -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

TestMultiNode/serial/AddNode (33.71s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220728204211-9812 -v 3 --alsologtostderr
E0728 20:44:05.303029    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:44:16.009005    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
E0728 20:44:25.108954    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220728204211-9812 -v 3 --alsologtostderr: (32.855761571s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (33.71s)

TestMultiNode/serial/ProfileList (0.44s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

TestMultiNode/serial/CopyFile (14.44s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 cp testdata/cp-test.txt multinode-20220728204211-9812:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 cp multinode-20220728204211-9812:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2569771978/001/cp-test_multinode-20220728204211-9812.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 cp multinode-20220728204211-9812:/home/docker/cp-test.txt multinode-20220728204211-9812-m02:/home/docker/cp-test_multinode-20220728204211-9812_multinode-20220728204211-9812-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812-m02 "sudo cat /home/docker/cp-test_multinode-20220728204211-9812_multinode-20220728204211-9812-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 cp multinode-20220728204211-9812:/home/docker/cp-test.txt multinode-20220728204211-9812-m03:/home/docker/cp-test_multinode-20220728204211-9812_multinode-20220728204211-9812-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812-m03 "sudo cat /home/docker/cp-test_multinode-20220728204211-9812_multinode-20220728204211-9812-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 cp testdata/cp-test.txt multinode-20220728204211-9812-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 cp multinode-20220728204211-9812-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2569771978/001/cp-test_multinode-20220728204211-9812-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 cp multinode-20220728204211-9812-m02:/home/docker/cp-test.txt multinode-20220728204211-9812:/home/docker/cp-test_multinode-20220728204211-9812-m02_multinode-20220728204211-9812.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812 "sudo cat /home/docker/cp-test_multinode-20220728204211-9812-m02_multinode-20220728204211-9812.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 cp multinode-20220728204211-9812-m02:/home/docker/cp-test.txt multinode-20220728204211-9812-m03:/home/docker/cp-test_multinode-20220728204211-9812-m02_multinode-20220728204211-9812-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812-m03 "sudo cat /home/docker/cp-test_multinode-20220728204211-9812-m02_multinode-20220728204211-9812-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 cp testdata/cp-test.txt multinode-20220728204211-9812-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 cp multinode-20220728204211-9812-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2569771978/001/cp-test_multinode-20220728204211-9812-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 cp multinode-20220728204211-9812-m03:/home/docker/cp-test.txt multinode-20220728204211-9812:/home/docker/cp-test_multinode-20220728204211-9812-m03_multinode-20220728204211-9812.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812 "sudo cat /home/docker/cp-test_multinode-20220728204211-9812-m03_multinode-20220728204211-9812.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 cp multinode-20220728204211-9812-m03:/home/docker/cp-test.txt multinode-20220728204211-9812-m02:/home/docker/cp-test_multinode-20220728204211-9812-m03_multinode-20220728204211-9812-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 ssh -n multinode-20220728204211-9812-m02 "sudo cat /home/docker/cp-test_multinode-20220728204211-9812-m03_multinode-20220728204211-9812-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.44s)

TestMultiNode/serial/StopNode (2.73s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220728204211-9812 node stop m03: (1.335887398s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220728204211-9812 status: exit status 7 (697.92921ms)

-- stdout --
	multinode-20220728204211-9812
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220728204211-9812-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220728204211-9812-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220728204211-9812 status --alsologtostderr: exit status 7 (696.82855ms)

-- stdout --
	multinode-20220728204211-9812
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220728204211-9812-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220728204211-9812-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0728 20:44:51.888348  100018 out.go:296] Setting OutFile to fd 1 ...
	I0728 20:44:51.888512  100018 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:44:51.888524  100018 out.go:309] Setting ErrFile to fd 2...
	I0728 20:44:51.888531  100018 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:44:51.888658  100018 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 20:44:51.888868  100018 out.go:303] Setting JSON to false
	I0728 20:44:51.888897  100018 mustload.go:65] Loading cluster: multinode-20220728204211-9812
	I0728 20:44:51.889277  100018 config.go:178] Loaded profile config "multinode-20220728204211-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 20:44:51.889303  100018 status.go:253] checking status of multinode-20220728204211-9812 ...
	I0728 20:44:51.889728  100018 cli_runner.go:164] Run: docker container inspect multinode-20220728204211-9812 --format={{.State.Status}}
	I0728 20:44:51.927334  100018 status.go:328] multinode-20220728204211-9812 host status = "Running" (err=<nil>)
	I0728 20:44:51.927374  100018 host.go:66] Checking if "multinode-20220728204211-9812" exists ...
	I0728 20:44:51.927627  100018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728204211-9812
	I0728 20:44:51.967123  100018 host.go:66] Checking if "multinode-20220728204211-9812" exists ...
	I0728 20:44:51.967609  100018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 20:44:51.967660  100018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728204211-9812
	I0728 20:44:52.005639  100018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49227 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728204211-9812/id_rsa Username:docker}
	I0728 20:44:52.092162  100018 ssh_runner.go:195] Run: systemctl --version
	I0728 20:44:52.097139  100018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 20:44:52.108671  100018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 20:44:52.230059  100018 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-07-28 20:44:52.142677347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 20:44:52.230629  100018 kubeconfig.go:92] found "multinode-20220728204211-9812" server: "https://192.168.58.2:8443"
	I0728 20:44:52.230657  100018 api_server.go:165] Checking apiserver status ...
	I0728 20:44:52.230688  100018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 20:44:52.240873  100018 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	I0728 20:44:52.249617  100018 api_server.go:181] apiserver freezer: "12:freezer:/docker/d95819764e28e65d772522274b6123364360aeebb800596beb7833f05662322d/kubepods/burstable/pod086f59db949379ebfc78b2930a4e01d5/67382b2faa7175fcdf1344a03756a39cf7028dd73a3ddaa5680cae3dd30751d7"
	I0728 20:44:52.249685  100018 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d95819764e28e65d772522274b6123364360aeebb800596beb7833f05662322d/kubepods/burstable/pod086f59db949379ebfc78b2930a4e01d5/67382b2faa7175fcdf1344a03756a39cf7028dd73a3ddaa5680cae3dd30751d7/freezer.state
	I0728 20:44:52.258319  100018 api_server.go:203] freezer state: "THAWED"
	I0728 20:44:52.258349  100018 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0728 20:44:52.263417  100018 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0728 20:44:52.263448  100018 status.go:419] multinode-20220728204211-9812 apiserver status = Running (err=<nil>)
	I0728 20:44:52.263458  100018 status.go:255] multinode-20220728204211-9812 status: &{Name:multinode-20220728204211-9812 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 20:44:52.263475  100018 status.go:253] checking status of multinode-20220728204211-9812-m02 ...
	I0728 20:44:52.263734  100018 cli_runner.go:164] Run: docker container inspect multinode-20220728204211-9812-m02 --format={{.State.Status}}
	I0728 20:44:52.302408  100018 status.go:328] multinode-20220728204211-9812-m02 host status = "Running" (err=<nil>)
	I0728 20:44:52.302440  100018 host.go:66] Checking if "multinode-20220728204211-9812-m02" exists ...
	I0728 20:44:52.302748  100018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728204211-9812-m02
	I0728 20:44:52.341002  100018 host.go:66] Checking if "multinode-20220728204211-9812-m02" exists ...
	I0728 20:44:52.341349  100018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 20:44:52.341398  100018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728204211-9812-m02
	I0728 20:44:52.379863  100018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49232 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728204211-9812-m02/id_rsa Username:docker}
	I0728 20:44:52.463990  100018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 20:44:52.474752  100018 status.go:255] multinode-20220728204211-9812-m02 status: &{Name:multinode-20220728204211-9812-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0728 20:44:52.474793  100018 status.go:253] checking status of multinode-20220728204211-9812-m03 ...
	I0728 20:44:52.475135  100018 cli_runner.go:164] Run: docker container inspect multinode-20220728204211-9812-m03 --format={{.State.Status}}
	I0728 20:44:52.513437  100018 status.go:328] multinode-20220728204211-9812-m03 host status = "Stopped" (err=<nil>)
	I0728 20:44:52.513464  100018 status.go:341] host is not running, skipping remaining checks
	I0728 20:44:52.513473  100018 status.go:255] multinode-20220728204211-9812-m03 status: &{Name:multinode-20220728204211-9812-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.73s)

TestMultiNode/serial/StartAfterStop (31.53s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220728204211-9812 node start m03 --alsologtostderr: (30.544077981s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.53s)

TestMultiNode/serial/RestartKeepsNodes (156.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220728204211-9812
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220728204211-9812
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220728204211-9812: (41.779875777s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220728204211-9812 --wait=true -v=8 --alsologtostderr
E0728 20:46:21.463022    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
E0728 20:46:49.143633    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220728204211-9812 --wait=true -v=8 --alsologtostderr: (1m54.447649899s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220728204211-9812
--- PASS: TestMultiNode/serial/RestartKeepsNodes (156.38s)

TestMultiNode/serial/DeleteNode (5.52s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220728204211-9812 node delete m03: (4.701914645s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.52s)

TestMultiNode/serial/StopMultiNode (40.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220728204211-9812 stop: (40.500358384s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220728204211-9812 status: exit status 7 (154.061825ms)

-- stdout --
	multinode-20220728204211-9812
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220728204211-9812-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220728204211-9812 status --alsologtostderr: exit status 7 (149.081775ms)
-- stdout --
	multinode-20220728204211-9812
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220728204211-9812-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0728 20:48:46.669916  110659 out.go:296] Setting OutFile to fd 1 ...
	I0728 20:48:46.670071  110659 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:48:46.670081  110659 out.go:309] Setting ErrFile to fd 2...
	I0728 20:48:46.670086  110659 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:48:46.670203  110659 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 20:48:46.670379  110659 out.go:303] Setting JSON to false
	I0728 20:48:46.670404  110659 mustload.go:65] Loading cluster: multinode-20220728204211-9812
	I0728 20:48:46.670790  110659 config.go:178] Loaded profile config "multinode-20220728204211-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 20:48:46.670813  110659 status.go:253] checking status of multinode-20220728204211-9812 ...
	I0728 20:48:46.671287  110659 cli_runner.go:164] Run: docker container inspect multinode-20220728204211-9812 --format={{.State.Status}}
	I0728 20:48:46.709638  110659 status.go:328] multinode-20220728204211-9812 host status = "Stopped" (err=<nil>)
	I0728 20:48:46.709685  110659 status.go:341] host is not running, skipping remaining checks
	I0728 20:48:46.709695  110659 status.go:255] multinode-20220728204211-9812 status: &{Name:multinode-20220728204211-9812 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 20:48:46.709747  110659 status.go:253] checking status of multinode-20220728204211-9812-m02 ...
	I0728 20:48:46.710042  110659 cli_runner.go:164] Run: docker container inspect multinode-20220728204211-9812-m02 --format={{.State.Status}}
	I0728 20:48:46.747542  110659 status.go:328] multinode-20220728204211-9812-m02 host status = "Stopped" (err=<nil>)
	I0728 20:48:46.747574  110659 status.go:341] host is not running, skipping remaining checks
	I0728 20:48:46.747581  110659 status.go:255] multinode-20220728204211-9812-m02 status: &{Name:multinode-20220728204211-9812-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.80s)
TestMultiNode/serial/RestartMultiNode (84.13s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220728204211-9812 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0728 20:48:57.424814    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 20:49:16.007449    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220728204211-9812 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m23.298459514s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220728204211-9812 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (84.13s)
TestMultiNode/serial/ValidateNameConflict (40.37s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220728204211-9812
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220728204211-9812-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220728204211-9812-m02 --driver=docker  --container-runtime=containerd: exit status 14 (123.338901ms)
-- stdout --
	* [multinode-20220728204211-9812-m02] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220728204211-9812-m02' is duplicated with machine name 'multinode-20220728204211-9812-m02' in profile 'multinode-20220728204211-9812'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220728204211-9812-m03 --driver=docker  --container-runtime=containerd
E0728 20:50:39.053495    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220728204211-9812-m03 --driver=docker  --container-runtime=containerd: (37.239623918s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220728204211-9812
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220728204211-9812: exit status 80 (408.663724ms)
-- stdout --
	* Adding node m03 to cluster multinode-20220728204211-9812
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220728204211-9812-m03 already exists in multinode-20220728204211-9812-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220728204211-9812-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220728204211-9812-m03: (2.524113651s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.37s)
TestPreload (118.04s)
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220728205055-9812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
E0728 20:51:21.460172    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220728205055-9812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m7.566026506s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220728205055-9812 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220728205055-9812 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (2.06124973s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220728205055-9812 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220728205055-9812 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (45.070750198s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220728205055-9812 -- sudo crictl image ls
helpers_test.go:175: Cleaning up "test-preload-20220728205055-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220728205055-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220728205055-9812: (2.875039363s)
--- PASS: TestPreload (118.04s)
TestScheduledStopUnix (114.85s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220728205253-9812 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220728205253-9812 --memory=2048 --driver=docker  --container-runtime=containerd: (37.579555848s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220728205253-9812 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220728205253-9812 -n scheduled-stop-20220728205253-9812
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220728205253-9812 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220728205253-9812 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220728205253-9812 -n scheduled-stop-20220728205253-9812
E0728 20:53:57.424721    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220728205253-9812
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220728205253-9812 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0728 20:54:16.008849    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220728205253-9812
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220728205253-9812: exit status 7 (108.132309ms)
-- stdout --
	scheduled-stop-20220728205253-9812
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220728205253-9812 -n scheduled-stop-20220728205253-9812
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220728205253-9812 -n scheduled-stop-20220728205253-9812: exit status 7 (105.333986ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220728205253-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220728205253-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220728205253-9812: (5.25600325s)
--- PASS: TestScheduledStopUnix (114.85s)
TestInsufficientStorage (16.93s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220728205448-9812 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220728205448-9812 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.095046017s)
-- stdout --
	{"specversion":"1.0","id":"3a1aefb6-553d-44ea-88c2-20a7c6cf00bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220728205448-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d8679d2f-2163-4d40-afac-6d8cc7bed078","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14555"}}
	{"specversion":"1.0","id":"3cbefaea-874a-45d3-a191-2276fa3530c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"376e9830-0272-4c3c-a23e-261fd3ddf099","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig"}}
	{"specversion":"1.0","id":"530ec91a-1174-4d4b-8209-d6a4422a5a6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube"}}
	{"specversion":"1.0","id":"118c469b-0e5d-47a2-b48a-84224be61049","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7d51dd52-0f6a-4495-a5d9-1d7644f74a18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"08a3af3b-79b8-4c87-aa86-a72cfac5f4fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0a608aa2-ae72-4583-9cc8-4811208560c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"22d5f2f5-a0a9-4068-99c9-0e10203dc408","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"9ebd7f80-05ec-4d73-95ac-9e756d430a3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220728205448-9812 in cluster insufficient-storage-20220728205448-9812","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1793089-8c68-4a71-9bcf-63c1a9d7cdd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"69d8a282-f3ac-4a77-a74c-504754f40da7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd1ae7f6-f500-4d6d-bff2-dd0ca84663f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220728205448-9812 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220728205448-9812 --output=json --layout=cluster: exit status 7 (386.056434ms)
-- stdout --
	{"Name":"insufficient-storage-20220728205448-9812","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220728205448-9812","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0728 20:54:59.034450  131336 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220728205448-9812" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220728205448-9812 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220728205448-9812 --output=json --layout=cluster: exit status 7 (386.576802ms)
-- stdout --
	{"Name":"insufficient-storage-20220728205448-9812","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220728205448-9812","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0728 20:54:59.420889  131447 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220728205448-9812" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	E0728 20:54:59.430266  131447 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/insufficient-storage-20220728205448-9812/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220728205448-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220728205448-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220728205448-9812: (6.064241075s)
--- PASS: TestInsufficientStorage (16.93s)
TestRunningBinaryUpgrade (87.89s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.3496939193.exe start -p running-upgrade-20220728205652-9812 --memory=2200 --vm-driver=docker  --container-runtime=containerd
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.3496939193.exe start -p running-upgrade-20220728205652-9812 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (40.215486504s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220728205652-9812 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0728 20:57:44.504241    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220728205652-9812 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.259826776s)
helpers_test.go:175: Cleaning up "running-upgrade-20220728205652-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220728205652-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220728205652-9812: (3.978123059s)
--- PASS: TestRunningBinaryUpgrade (87.89s)
TestMissingContainerUpgrade (166.23s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.3526458417.exe start -p missing-upgrade-20220728205505-9812 --memory=2200 --driver=docker  --container-runtime=containerd
E0728 20:55:20.471026    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.3526458417.exe start -p missing-upgrade-20220728205505-9812 --memory=2200 --driver=docker  --container-runtime=containerd: (1m25.355917232s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220728205505-9812
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220728205505-9812: (12.611951986s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220728205505-9812
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220728205505-9812 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220728205505-9812 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.367684143s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220728205505-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220728205505-9812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220728205505-9812: (5.162181685s)
--- PASS: TestMissingContainerUpgrade (166.23s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220728205505-9812 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220728205505-9812 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (97.565124ms)
-- stdout --
	* [NoKubernetes-20220728205505-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
TestStoppedBinaryUpgrade/Setup (0.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

TestNoKubernetes/serial/StartWithK8s (61.16s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220728205505-9812 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220728205505-9812 --driver=docker  --container-runtime=containerd: (1m0.630250274s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220728205505-9812 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (61.16s)

TestStoppedBinaryUpgrade/Upgrade (141.68s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.3384229472.exe start -p stopped-upgrade-20220728205505-9812 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.3384229472.exe start -p stopped-upgrade-20220728205505-9812 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m1.299080284s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.3384229472.exe -p stopped-upgrade-20220728205505-9812 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.3384229472.exe -p stopped-upgrade-20220728205505-9812 stop: (3.353758161s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220728205505-9812 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0728 20:56:21.460063    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220728205505-9812 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m17.027860262s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (141.68s)

TestNoKubernetes/serial/StartWithStopK8s (18.5s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220728205505-9812 --no-kubernetes --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220728205505-9812 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.859502368s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220728205505-9812 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220728205505-9812 status -o json: exit status 2 (418.111752ms)

-- stdout --
	{"Name":"NoKubernetes-20220728205505-9812","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220728205505-9812
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220728205505-9812: (2.223015366s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.50s)

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220728205505-9812 --no-kubernetes --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220728205505-9812 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.298423788s)
--- PASS: TestNoKubernetes/serial/Start (5.30s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.47s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220728205505-9812 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220728205505-9812 "sudo systemctl is-active --quiet service kubelet": exit status 1 (468.156877ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.47s)
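The `exit status 1` here is the pass condition: `systemctl is-active --quiet` exits 0 only when the unit is active, and exits 3 when it is inactive/dead, so a stopped kubelet makes the ssh'd command fail as intended. A portable sketch of that check — with the systemctl call simulated, since this is a hypothetical illustration rather than the test's real harness code:

```shell
#!/bin/sh
# systemctl is-active exit codes: 0 = active, 3 = inactive/dead.
# The test above PASSes precisely because the command exits non-zero.
kubelet_running() {
  # On a real node this body would be:
  #   sudo systemctl is-active --quiet service kubelet
  # Here we simulate the "inactive" case with exit status 3.
  return 3
}

if kubelet_running; then
  echo "kubelet is active"
else
  rc=$?
  echo "kubelet is not running (exit status $rc)"
fi
```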

TestNoKubernetes/serial/ProfileList (8.43s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (4.004245857s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (4.42876411s)
--- PASS: TestNoKubernetes/serial/ProfileList (8.43s)

TestNoKubernetes/serial/Stop (4.43s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220728205505-9812
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220728205505-9812: (4.428956942s)
--- PASS: TestNoKubernetes/serial/Stop (4.43s)

TestNoKubernetes/serial/StartNoArgs (6.4s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220728205505-9812 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220728205505-9812 --driver=docker  --container-runtime=containerd: (6.401344767s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.40s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220728205505-9812 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220728205505-9812 "sudo systemctl is-active --quiet service kubelet": exit status 1 (401.503059ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220728205505-9812
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220728205505-9812: (1.034923563s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

TestPause/serial/Start (65.32s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220728205731-9812 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220728205731-9812 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m5.316351591s)
--- PASS: TestPause/serial/Start (65.32s)

TestNetworkPlugins/group/false (0.65s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220728205821-9812 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20220728205821-9812 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (320.546531ms)

-- stdout --
	* [false-20220728205821-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0728 20:58:21.327846  172049 out.go:296] Setting OutFile to fd 1 ...
	I0728 20:58:21.328140  172049 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:58:21.328175  172049 out.go:309] Setting ErrFile to fd 2...
	I0728 20:58:21.328191  172049 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 20:58:21.328379  172049 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 20:58:21.329169  172049 out.go:303] Setting JSON to false
	I0728 20:58:21.331893  172049 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2451,"bootTime":1659039450,"procs":1103,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0728 20:58:21.331993  172049 start.go:125] virtualization: kvm guest
	I0728 20:58:21.334987  172049 out.go:177] * [false-20220728205821-9812] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0728 20:58:21.336951  172049 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 20:58:21.336990  172049 notify.go:193] Checking for updates...
	I0728 20:58:21.339864  172049 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 20:58:21.341965  172049 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 20:58:21.343601  172049 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 20:58:21.348071  172049 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0728 20:58:21.350451  172049 config.go:178] Loaded profile config "force-systemd-env-20220728205751-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 20:58:21.350610  172049 config.go:178] Loaded profile config "kubernetes-upgrade-20220728205630-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 20:58:21.350749  172049 config.go:178] Loaded profile config "pause-20220728205731-9812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0728 20:58:21.350817  172049 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 20:58:21.396835  172049 docker.go:137] docker version: linux-20.10.17
	I0728 20:58:21.396961  172049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 20:58:21.556864  172049 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:57 SystemTime:2022-07-28 20:58:21.456702855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 20:58:21.557017  172049 docker.go:254] overlay module found
	I0728 20:58:21.559516  172049 out.go:177] * Using the docker driver based on user configuration
	I0728 20:58:21.561029  172049 start.go:284] selected driver: docker
	I0728 20:58:21.561049  172049 start.go:808] validating driver "docker" against <nil>
	I0728 20:58:21.561079  172049 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 20:58:21.563912  172049 out.go:177] 
	W0728 20:58:21.565481  172049 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0728 20:58:21.567093  172049 out.go:177] 

** /stderr **
helpers_test.go:175: Cleaning up "false-20220728205821-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20220728205821-9812
--- PASS: TestNetworkPlugins/group/false (0.65s)
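This test also passes by failing fast: with `--container-runtime=containerd`, `--cni=false` is rejected with MK_USAGE ("The \"containerd\" container runtime requires CNI") before any cluster is created. A hypothetical pre-flight check mirroring that rule — the function name and the inclusion of crio are assumptions for illustration, not minikube's actual code:

```shell
#!/bin/sh
# Hypothetical sketch of the rule enforced above: runtimes that have no
# built-in networking (assumed here: containerd, crio) need a CNI plugin,
# so --cni=false is invalid for them.
validate_cni() {
  runtime="$1"
  cni="$2"
  case "$runtime" in
    containerd|crio)
      if [ "$cni" = "false" ]; then
        echo "X Exiting due to MK_USAGE: The \"$runtime\" container runtime requires CNI" >&2
        return 14
      fi
      ;;
  esac
  return 0
}

validate_cni containerd false 2>/dev/null || echo "rejected with status $?"
```

In the log above the same outcome shows up as `exit status 14` after roughly 320ms, since validation happens before the driver is even started.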

TestPause/serial/SecondStartNoReconfiguration (16.43s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220728205731-9812 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220728205731-9812 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.41754022s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (16.43s)

TestPause/serial/Pause (0.86s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220728205731-9812 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

TestPause/serial/VerifyStatus (0.47s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220728205731-9812 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220728205731-9812 --output=json --layout=cluster: exit status 2 (473.242227ms)

-- stdout --
	{"Name":"pause-20220728205731-9812","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220728205731-9812","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.47s)

TestPause/serial/Unpause (0.84s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220728205731-9812 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)

TestPause/serial/PauseAgain (0.96s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220728205731-9812 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.96s)

TestPause/serial/DeletePaused (2.96s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220728205731-9812 --alsologtostderr -v=5
E0728 20:58:57.424786    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220728205731-9812 --alsologtostderr -v=5: (2.964605348s)
--- PASS: TestPause/serial/DeletePaused (2.96s)

TestPause/serial/VerifyDeletedResources (0.93s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220728205731-9812
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220728205731-9812: exit status 1 (37.117056ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220728205731-9812

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.93s)

TestStartStop/group/old-k8s-version/serial/FirstStart (125.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220728205919-9812 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220728205919-9812 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m5.698614508s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (125.70s)

TestStartStop/group/no-preload/serial/FirstStart (52.38s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220728205940-9812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220728205940-9812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (52.377378488s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.38s)

TestStartStop/group/no-preload/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220728205940-9812 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [515b7f61-f14f-455c-a3ac-2f8e9838ca96] Pending
helpers_test.go:342: "busybox" [515b7f61-f14f-455c-a3ac-2f8e9838ca96] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [515b7f61-f14f-455c-a3ac-2f8e9838ca96] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.015654629s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220728205940-9812 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220728205940-9812 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220728205940-9812 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/no-preload/serial/Stop (20.37s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220728205940-9812 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220728205940-9812 --alsologtostderr -v=3: (20.373938085s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.37s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220728205940-9812 -n no-preload-20220728205940-9812
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220728205940-9812 -n no-preload-20220728205940-9812: exit status 7 (116.054042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220728205940-9812 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (313.76s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220728205940-9812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3
E0728 21:01:21.459951    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220728205940-9812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (5m13.051519228s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220728205940-9812 -n no-preload-20220728205940-9812
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (313.76s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220728205919-9812 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [44dc46be-31eb-4a23-be29-48553ea56552] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [44dc46be-31eb-4a23-be29-48553ea56552] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.014132212s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220728205919-9812 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220728205919-9812 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220728205919-9812 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/old-k8s-version/serial/Stop (20.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220728205919-9812 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220728205919-9812 --alsologtostderr -v=3: (20.39010086s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220728205919-9812 -n old-k8s-version-20220728205919-9812
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220728205919-9812 -n old-k8s-version-20220728205919-9812: exit status 7 (123.506884ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220728205919-9812 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/old-k8s-version/serial/SecondStart (436.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220728205919-9812 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220728205919-9812 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m16.30405033s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220728205919-9812 -n old-k8s-version-20220728205919-9812
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (436.84s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (59.61s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220728210213-9812 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220728210213-9812 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (59.608733144s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (59.61s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.49s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220728210213-9812 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [612d3f78-5681-48e1-a039-b3f0bded414a] Pending
helpers_test.go:342: "busybox" [612d3f78-5681-48e1-a039-b3f0bded414a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [612d3f78-5681-48e1-a039-b3f0bded414a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.013917322s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220728210213-9812 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.49s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220728210213-9812 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220728210213-9812 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/default-k8s-different-port/serial/Stop (20.34s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220728210213-9812 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220728210213-9812 --alsologtostderr -v=3: (20.341375538s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.34s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220728210213-9812 -n default-k8s-different-port-20220728210213-9812
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220728210213-9812 -n default-k8s-different-port-20220728210213-9812: exit status 7 (111.206604ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220728210213-9812 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (310.57s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220728210213-9812 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3
E0728 21:03:57.424637    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory
E0728 21:04:16.007462    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220728210213-9812 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (5m10.078765071s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220728210213-9812 -n default-k8s-different-port-20220728210213-9812
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (310.57s)

TestStartStop/group/newest-cni/serial/FirstStart (51.22s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220728210614-9812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220728210614-9812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (51.220054929s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.22s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (19.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-dd99g" [214ca92c-dbec-47a5-b20c-bcfcbc0e68b2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0728 21:06:21.460003    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728203447-9812/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-dd99g" [214ca92c-dbec-47a5-b20c-bcfcbc0e68b2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.01537751s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (19.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-dd99g" [214ca92c-dbec-47a5-b20c-bcfcbc0e68b2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008400046s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-20220728205940-9812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220728205940-9812 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/no-preload/serial/Pause (4s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220728205940-9812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220728205940-9812 -n no-preload-20220728205940-9812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220728205940-9812 -n no-preload-20220728205940-9812: exit status 2 (480.766901ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220728205940-9812 -n no-preload-20220728205940-9812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220728205940-9812 -n no-preload-20220728205940-9812: exit status 2 (483.029405ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220728205940-9812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220728205940-9812 -n no-preload-20220728205940-9812
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220728205940-9812 -n no-preload-20220728205940-9812
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.00s)

TestStartStop/group/embed-certs/serial/FirstStart (58.85s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220728210649-9812 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220728210649-9812 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (58.846213822s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (58.85s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.72s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220728210614-9812 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.72s)

TestStartStop/group/newest-cni/serial/Stop (20.42s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220728210614-9812 --alsologtostderr -v=3
E0728 21:07:19.054591    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728202716-9812/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220728210614-9812 --alsologtostderr -v=3: (20.423683787s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.42s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220728210614-9812 -n newest-cni-20220728210614-9812
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220728210614-9812 -n newest-cni-20220728210614-9812: exit status 7 (121.00302ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220728210614-9812 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/newest-cni/serial/SecondStart (32.34s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220728210614-9812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220728210614-9812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (31.736924407s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220728210614-9812 -n newest-cni-20220728210614-9812
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.34s)

TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220728210649-9812 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5a8eb18f-622c-4257-bbb0-1bbb91f4d690] Pending
helpers_test.go:342: "busybox" [5a8eb18f-622c-4257-bbb0-1bbb91f4d690] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [5a8eb18f-622c-4257-bbb0-1bbb91f4d690] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.013060534s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220728210649-9812 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220728210649-9812 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220728210649-9812 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.034045958s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220728210649-9812 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/embed-certs/serial/Stop (20.58s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220728210649-9812 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220728210649-9812 --alsologtostderr -v=3: (20.582668134s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.58s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.61s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220728210614-9812 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.61s)

TestStartStop/group/newest-cni/serial/Pause (3.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220728210614-9812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220728210614-9812 -n newest-cni-20220728210614-9812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220728210614-9812 -n newest-cni-20220728210614-9812: exit status 2 (428.671088ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220728210614-9812 -n newest-cni-20220728210614-9812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220728210614-9812 -n newest-cni-20220728210614-9812: exit status 2 (429.020433ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220728210614-9812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220728210614-9812 -n newest-cni-20220728210614-9812
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220728210614-9812 -n newest-cni-20220728210614-9812
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.46s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (48.04s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220728205820-9812 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220728205820-9812 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (48.042745988s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220728210649-9812 -n embed-certs-20220728210649-9812
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220728210649-9812 -n embed-certs-20220728210649-9812: exit status 7 (109.465212ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220728210649-9812 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (557.77s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220728210649-9812 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220728210649-9812 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (9m17.36733731s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220728210649-9812 -n embed-certs-20220728210649-9812
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (557.77s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (11.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-fpqxr" [0a48f312-3fdf-49d1-bb2f-db8051c5feb0] Pending
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-fpqxr" [0a48f312-3fdf-49d1-bb2f-db8051c5feb0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-fpqxr" [0a48f312-3fdf-49d1-bb2f-db8051c5feb0] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.014386206s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (11.02s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220728205820-9812 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220728205820-9812 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-g8z2c" [a8f4c245-6ad4-4b2a-885b-47824e355ef9] Pending
helpers_test.go:342: "netcat-869c55b6dc-g8z2c" [a8f4c245-6ad4-4b2a-885b-47824e355ef9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0728 21:08:57.424974    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728203204-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-g8z2c" [a8f4c245-6ad4-4b2a-885b-47824e355ef9] Running

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.007737596s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.30s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-fpqxr" [0a48f312-3fdf-49d1-bb2f-db8051c5feb0] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007038053s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-different-port-20220728210213-9812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220728205820-9812 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220728205820-9812 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220728205820-9812 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (62.62s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220728205821-9812 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220728205821-9812 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m2.619161375s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.62s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220728210213-9812 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.49s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (4s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220728210213-9812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220728210213-9812 -n default-k8s-different-port-20220728210213-9812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220728210213-9812 -n default-k8s-different-port-20220728210213-9812: exit status 2 (448.317014ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220728210213-9812 -n default-k8s-different-port-20220728210213-9812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220728210213-9812 -n default-k8s-different-port-20220728210213-9812: exit status 2 (466.470082ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220728210213-9812 --alsologtostderr -v=1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220728210213-9812 -n default-k8s-different-port-20220728210213-9812

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220728210213-9812 -n default-k8s-different-port-20220728210213-9812
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (4.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-p6pnc" [8b787947-cf9f-4242-ba81-445ea0cfe6c7] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014521057s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-p6pnc" [8b787947-cf9f-4242-ba81-445ea0cfe6c7] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006024278s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220728205919-9812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (78.43s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220728205822-9812 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220728205822-9812 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m18.426535506s)
--- PASS: TestNetworkPlugins/group/cilium/Start (78.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220728205919-9812 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220728205919-9812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-20220728205919-9812 --alsologtostderr -v=1: (1.523834968s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728205919-9812 -n old-k8s-version-20220728205919-9812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728205919-9812 -n old-k8s-version-20220728205919-9812: exit status 2 (419.715645ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220728205919-9812 -n old-k8s-version-20220728205919-9812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220728205919-9812 -n old-k8s-version-20220728205919-9812: exit status 2 (423.839787ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220728205919-9812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-20220728205919-9812 --alsologtostderr -v=1: (1.31037136s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728205919-9812 -n old-k8s-version-20220728205919-9812
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220728205919-9812 -n old-k8s-version-20220728205919-9812
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.69s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-5s9rh" [4bb3653d-53e8-4164-88a5-fbab8b688614] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.022458722s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220728205821-9812 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220728205821-9812 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-jd6pk" [0607fa9e-775b-4284-a490-60c7fc6dd9a7] Pending
helpers_test.go:342: "netcat-869c55b6dc-jd6pk" [0607fa9e-775b-4284-a490-60c7fc6dd9a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-jd6pk" [0607fa9e-775b-4284-a490-60c7fc6dd9a7] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006824314s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-d4bv4" [5e36d1c0-80bd-42b6-ba7f-35d09175f500] Running
E0728 21:10:38.444724    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.017155894s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220728205822-9812 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (10.86s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220728205822-9812 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-4hkgb" [9d729ecd-8bb5-4151-8df3-307e539cee79] Pending
E0728 21:10:43.565673    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728205940-9812/client.crt: no such file or directory
helpers_test.go:342: "netcat-869c55b6dc-4hkgb" [9d729ecd-8bb5-4151-8df3-307e539cee79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-4hkgb" [9d729ecd-8bb5-4151-8df3-307e539cee79] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.006455163s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (10.86s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220728205822-9812 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220728205822-9812 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220728205822-9812 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (40.67s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220728205820-9812 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220728205820-9812 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (40.666825268s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (40.67s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220728205820-9812 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220728205820-9812 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-tftlp" [12fbe14e-896f-4fe4-8d11-59aa337efea0] Pending
helpers_test.go:342: "netcat-869c55b6dc-tftlp" [12fbe14e-896f-4fe4-8d11-59aa337efea0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-tftlp" [12fbe14e-896f-4fe4-8d11-59aa337efea0] Running
E0728 21:11:45.571175    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.006278561s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (286.02s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220728205820-9812 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
E0728 21:16:40.172049    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728205820-9812/client.crt: no such file or directory
E0728 21:16:52.773558    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728205919-9812/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220728205820-9812 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (4m46.01789755s)
--- PASS: TestNetworkPlugins/group/bridge/Start (286.02s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-xp2xg" [d60a4605-4d5c-4853-9def-a4b9a71dbbcf] Running
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-xp2xg" [d60a4605-4d5c-4853-9def-a4b9a71dbbcf] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010919942s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-xp2xg" [d60a4605-4d5c-4853-9def-a4b9a71dbbcf] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00622207s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220728210649-9812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220728210649-9812 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/embed-certs/serial/Pause (3.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220728210649-9812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220728210649-9812 -n embed-certs-20220728210649-9812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220728210649-9812 -n embed-certs-20220728210649-9812: exit status 2 (394.33658ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220728210649-9812 -n embed-certs-20220728210649-9812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220728210649-9812 -n embed-certs-20220728210649-9812: exit status 2 (390.961869ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220728210649-9812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220728210649-9812 -n embed-certs-20220728210649-9812
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220728210649-9812 -n embed-certs-20220728210649-9812
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)
E0728 21:18:13.053665    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728210213-9812/client.crt: no such file or directory
E0728 21:18:20.775089    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728205822-9812/client.crt: no such file or directory

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220728205820-9812 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220728205820-9812 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-lv98t" [c1a69d00-cef7-465f-9908-68118465d6d6] Pending
helpers_test.go:342: "netcat-869c55b6dc-lv98t" [c1a69d00-cef7-465f-9908-68118465d6d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-lv98t" [c1a69d00-cef7-465f-9908-68118465d6d6] Running
E0728 21:21:32.834926    9812 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14555-3256-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728205821-9812/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005571377s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.18s)


Test skip (23/273)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.24.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.24.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.24.3/cached-images (0.00s)

TestDownloadOnly/v1.24.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.24.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.24.3/binaries (0.00s)

TestDownloadOnly/v1.24.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.24.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.24.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:455: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.35s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220728210648-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220728210648-9812
--- SKIP: TestStartStop/group/disable-driver-mounts (0.35s)

TestNetworkPlugins/group/kubenet (0.33s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-20220728205820-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20220728205820-9812
--- SKIP: TestNetworkPlugins/group/kubenet (0.33s)

TestNetworkPlugins/group/flannel (0.34s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220728205820-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220728205820-9812
--- SKIP: TestNetworkPlugins/group/flannel (0.34s)

TestNetworkPlugins/group/custom-flannel (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220728205821-9812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-20220728205821-9812
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.34s)
