Test Report: Docker_Linux_containerd 14695

Commit: 16c8c96838ca145d17ecca8303180c41961a99dd:2022-08-01:25115

Failed tests (5/275)

Order  Failed test                                        Duration (s)
71     TestFunctional/serial/LogsFileCmd                  1.14
211    TestKubernetesUpgrade                              577.83
312    TestNetworkPlugins/group/calico/Start              527.51
329    TestNetworkPlugins/group/bridge/DNS                352.96
332    TestNetworkPlugins/group/enable-default-cni/DNS    363.16
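The durations in the summary table already hint at two failure classes: sub-second assertion failures versus runs in the 350–580s range that look like timeouts. A quick sketch of that triage over the table data (the 500s threshold is an assumption, not something the harness reports):

```shell
# Filter the failed-test table for runs longer than 500s, which are more
# likely timeouts/hangs than plain assertion failures. Data is copied
# verbatim from the summary table above.
slow=$(printf '%s\n' \
  '71 TestFunctional/serial/LogsFileCmd 1.14' \
  '211 TestKubernetesUpgrade 577.83' \
  '312 TestNetworkPlugins/group/calico/Start 527.51' \
  '329 TestNetworkPlugins/group/bridge/DNS 352.96' \
  '332 TestNetworkPlugins/group/enable-default-cni/DNS 363.16' |
  awk '$3 > 500 { print $2 }')   # $3 is the duration column
printf '%s\n' "$slow"
```

This flags TestKubernetesUpgrade and TestNetworkPlugins/group/calico/Start, consistent with the detailed logs below where the upgrade start command runs for 8m34s before exiting.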
TestFunctional/serial/LogsFileCmd (1.14s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 logs --file /tmp/TestFunctionalserialLogsFileCmd1332396266/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 logs --file /tmp/TestFunctionalserialLogsFileCmd1332396266/001/logs.txt: (1.140125012s)
functional_test.go:1247: expected empty minikube logs output, but got: 
***
-- stdout --
	

-- /stdout --
** stderr ** 
	E0801 22:53:01.305948   39981 logs.go:192] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 58fba6a5fb41ae46282c5c86e00afc9c1bf33fc85dbb2bb0cb7ec7bbc4103e74" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 58fba6a5fb41ae46282c5c86e00afc9c1bf33fc85dbb2bb0cb7ec7bbc4103e74": Process exited with status 1
	stdout:
	
	stderr:
	time="2022-08-01T22:53:01Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_etcd-functional-20220801225035-9849_f461fd147446615638780d3f8f40ae7a/etcd/1.log\": lstat /var/log/pods/kube-system_etcd-functional-20220801225035-9849_f461fd147446615638780d3f8f40ae7a/etcd/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2022-08-01T22:53:01Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_etcd-functional-20220801225035-9849_f461fd147446615638780d3f8f40ae7a/etcd/1.log\\\": lstat /var/log/pods/kube-system_etcd-functional-20220801225035-9849_f461fd147446615638780d3f8f40ae7a/etcd/1.log: no such file or directory\"\n\n** /stderr **"
	E0801 22:53:01.430281   39981 logs.go:192] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 a3b4f2549d28eaa490068c946c1be1ff2865e28041d59a53652ecd3a7a5cfb39" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 a3b4f2549d28eaa490068c946c1be1ff2865e28041d59a53652ecd3a7a5cfb39": Process exited with status 1
	stdout:
	
	stderr:
	time="2022-08-01T22:53:01Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-functional-20220801225035-9849_6677733d7b9d9884d9e1bf42664ba2d2/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-functional-20220801225035-9849_6677733d7b9d9884d9e1bf42664ba2d2/kube-controller-manager/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2022-08-01T22:53:01Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-controller-manager-functional-20220801225035-9849_6677733d7b9d9884d9e1bf42664ba2d2/kube-controller-manager/1.log\\\": lstat /var/log/pods/kube-system_kube-controller-manager-functional-20220801225035-9849_6677733d7b9d9884d9e1bf42664ba2d2/kube-controller-manager/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: etcd [58fba6a5fb41ae46282c5c86e00afc9c1bf33fc85dbb2bb0cb7ec7bbc4103e74], kube-controller-manager [a3b4f2549d28eaa490068c946c1be1ff2865e28041d59a53652ecd3a7a5cfb39]

** /stderr **
--- FAIL: TestFunctional/serial/LogsFileCmd (1.14s)

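The two crictl errors above both fail on `lstat ... no such file or directory` while resolving symlinks under /var/log/pods, which suggests the container log symlink outlived its target (plausibly removed by log rotation or a container restart between listing and tailing; that cause is an inference, not stated in the log). A minimal, self-contained reproduction of that condition with a throwaway directory, not real pod logs:

```shell
# Reproduce the failure class: a symlink whose target no longer exists,
# as crictl encounters when a pod log file vanishes under /var/log/pods.
dir=$(mktemp -d)
ln -s "$dir/rotated-away.log" "$dir/1.log"    # dangling symlink, like the stale pod log
if readlink -e "$dir/1.log" >/dev/null 2>&1; then   # -e requires all path components to exist
  result="log resolves"
else
  result="no such file or directory"          # the condition crictl reports as fatal
fi
rm -rf "$dir"
printf '%s\n' "$result"
```

Since `minikube logs` still wrote the file and only these two containers' logs were missing, the test's failure is in the strictness of its "empty output" assertion as much as in the underlying log files.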
TestKubernetesUpgrade (577.83s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220801231451-9849 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0801 23:15:16.350584    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220801231451-9849 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (57.634311958s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220801231451-9849

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220801231451-9849: (1.480726666s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220801231451-9849 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220801231451-9849 status --format={{.Host}}: exit status 7 (140.953228ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220801231451-9849 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220801231451-9849 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (8m34.60066812s)

-- stdout --
	* [kubernetes-upgrade-20220801231451-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-20220801231451-9849 in cluster kubernetes-upgrade-20220801231451-9849
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-20220801231451-9849" ...
	* Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Aug 01 23:24:24 kubernetes-upgrade-20220801231451-9849 kubelet[11658]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	
	

-- /stdout --
** stderr ** 
	I0801 23:15:50.937974  164558 out.go:296] Setting OutFile to fd 1 ...
	I0801 23:15:50.938123  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:15:50.938134  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:15:50.938141  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:15:50.938262  164558 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 23:15:50.938863  164558 out.go:303] Setting JSON to false
	I0801 23:15:50.940222  164558 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3501,"bootTime":1659392250,"procs":720,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0801 23:15:50.940289  164558 start.go:125] virtualization: kvm guest
	I0801 23:15:51.033527  164558 out.go:177] * [kubernetes-upgrade-20220801231451-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0801 23:15:51.159885  164558 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 23:15:51.133961  164558 notify.go:193] Checking for updates...
	I0801 23:15:51.514016  164558 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 23:15:51.679477  164558 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 23:15:51.689468  164558 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 23:15:51.692400  164558 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0801 23:15:51.694929  164558 config.go:180] Loaded profile config "kubernetes-upgrade-20220801231451-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0801 23:15:51.695377  164558 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 23:15:51.767550  164558 docker.go:137] docker version: linux-20.10.17
	I0801 23:15:51.767687  164558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 23:15:51.911277  164558 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:66 SystemTime:2022-08-01 23:15:51.806960708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 23:15:51.911445  164558 docker.go:254] overlay module found
	I0801 23:15:52.076652  164558 out.go:177] * Using the docker driver based on existing profile
	I0801 23:15:52.175863  164558 start.go:284] selected driver: docker
	I0801 23:15:52.175890  164558 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-20220801231451-9849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220801231451-9
849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 23:15:52.176037  164558 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 23:15:52.177165  164558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 23:15:52.295640  164558 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:66 SystemTime:2022-08-01 23:15:52.2114529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 23:15:52.295900  164558 cni.go:95] Creating CNI manager for ""
	I0801 23:15:52.295923  164558 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0801 23:15:52.295937  164558 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220801231451-9849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220801231451-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 23:15:52.332248  164558 out.go:177] * Starting control plane node kubernetes-upgrade-20220801231451-9849 in cluster kubernetes-upgrade-20220801231451-9849
	I0801 23:15:52.340534  164558 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0801 23:15:52.349571  164558 out.go:177] * Pulling base image ...
	I0801 23:15:52.357408  164558 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0801 23:15:52.357467  164558 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4
	I0801 23:15:52.357496  164558 cache.go:57] Caching tarball of preloaded images
	I0801 23:15:52.357498  164558 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 23:15:52.357839  164558 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 23:15:52.357872  164558 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on containerd
	I0801 23:15:52.358065  164558 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801231451-9849/config.json ...
	I0801 23:15:52.398616  164558 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 23:15:52.398656  164558 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 23:15:52.398677  164558 cache.go:208] Successfully downloaded all kic artifacts
	I0801 23:15:52.398746  164558 start.go:371] acquiring machines lock for kubernetes-upgrade-20220801231451-9849: {Name:mk268043677f6e46f3017b56aaa74473bf92fded Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 23:15:52.398928  164558 start.go:375] acquired machines lock for "kubernetes-upgrade-20220801231451-9849" in 150.262µs
	I0801 23:15:52.398952  164558 start.go:95] Skipping create...Using existing machine configuration
	I0801 23:15:52.398957  164558 fix.go:55] fixHost starting: 
	I0801 23:15:52.399420  164558 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220801231451-9849 --format={{.State.Status}}
	I0801 23:15:52.501885  164558 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220801231451-9849: state=Stopped err=<nil>
	W0801 23:15:52.501931  164558 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 23:15:52.504870  164558 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220801231451-9849" ...
	I0801 23:15:52.510957  164558 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220801231451-9849
	I0801 23:15:53.122031  164558 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220801231451-9849 --format={{.State.Status}}
	I0801 23:15:53.166631  164558 kic.go:415] container "kubernetes-upgrade-20220801231451-9849" state is running.
	I0801 23:15:53.167107  164558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220801231451-9849
	I0801 23:15:53.207932  164558 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801231451-9849/config.json ...
	I0801 23:15:53.208157  164558 machine.go:88] provisioning docker machine ...
	I0801 23:15:53.208194  164558 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220801231451-9849"
	I0801 23:15:53.208250  164558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801231451-9849
	I0801 23:15:53.256927  164558 main.go:134] libmachine: Using SSH client type: native
	I0801 23:15:53.257131  164558 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil>  [] 0s} 127.0.0.1 49337 <nil> <nil>}
	I0801 23:15:53.257155  164558 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220801231451-9849 && echo "kubernetes-upgrade-20220801231451-9849" | sudo tee /etc/hostname
	I0801 23:15:53.257881  164558 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39554->127.0.0.1:49337: read: connection reset by peer
	I0801 23:15:56.474919  164558 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220801231451-9849
	
	I0801 23:15:56.474999  164558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801231451-9849
	I0801 23:15:56.511246  164558 main.go:134] libmachine: Using SSH client type: native
	I0801 23:15:56.511444  164558 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil>  [] 0s} 127.0.0.1 49337 <nil> <nil>}
	I0801 23:15:56.511477  164558 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220801231451-9849' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220801231451-9849/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220801231451-9849' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 23:15:56.626031  164558 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 23:15:56.626061  164558 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 23:15:56.626105  164558 ubuntu.go:177] setting up certificates
	I0801 23:15:56.626116  164558 provision.go:83] configureAuth start
	I0801 23:15:56.626171  164558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220801231451-9849
	I0801 23:15:56.659370  164558 provision.go:138] copyHostCerts
	I0801 23:15:56.659441  164558 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 23:15:56.659451  164558 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 23:15:56.659523  164558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 23:15:56.659628  164558 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 23:15:56.659643  164558 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 23:15:56.659691  164558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 23:15:56.659865  164558 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 23:15:56.659881  164558 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 23:15:56.659957  164558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1675 bytes)
	I0801 23:15:56.660042  164558 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220801231451-9849 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220801231451-9849]
	I0801 23:15:56.832975  164558 provision.go:172] copyRemoteCerts
	I0801 23:15:56.833054  164558 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 23:15:56.833106  164558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801231451-9849
	I0801 23:15:56.866311  164558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801231451-9849/id_rsa Username:docker}
	I0801 23:15:56.953580  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 23:15:56.999186  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0801 23:15:57.085421  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0801 23:15:57.106442  164558 provision.go:86] duration metric: configureAuth took 480.310059ms
	I0801 23:15:57.106470  164558 ubuntu.go:193] setting minikube options for container-runtime
	I0801 23:15:57.106691  164558 config.go:180] Loaded profile config "kubernetes-upgrade-20220801231451-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:15:57.106705  164558 machine.go:91] provisioned docker machine in 3.898531696s
	I0801 23:15:57.106713  164558 start.go:307] post-start starting for "kubernetes-upgrade-20220801231451-9849" (driver="docker")
	I0801 23:15:57.106722  164558 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 23:15:57.106770  164558 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 23:15:57.106815  164558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801231451-9849
	I0801 23:15:57.141138  164558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801231451-9849/id_rsa Username:docker}
	I0801 23:15:57.225569  164558 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 23:15:57.228218  164558 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 23:15:57.228244  164558 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 23:15:57.228257  164558 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 23:15:57.228265  164558 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 23:15:57.228277  164558 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 23:15:57.228348  164558 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 23:15:57.228438  164558 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/98492.pem -> 98492.pem in /etc/ssl/certs
	I0801 23:15:57.228533  164558 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 23:15:57.235105  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/98492.pem --> /etc/ssl/certs/98492.pem (1708 bytes)
	I0801 23:15:57.264500  164558 start.go:310] post-start completed in 157.770872ms
	I0801 23:15:57.264592  164558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 23:15:57.264649  164558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801231451-9849
	I0801 23:15:57.302246  164558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801231451-9849/id_rsa Username:docker}
	I0801 23:15:57.382724  164558 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 23:15:57.386648  164558 fix.go:57] fixHost completed within 4.987684735s
	I0801 23:15:57.386669  164558 start.go:82] releasing machines lock for "kubernetes-upgrade-20220801231451-9849", held for 4.987727525s
	I0801 23:15:57.386755  164558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220801231451-9849
	I0801 23:15:57.419657  164558 ssh_runner.go:195] Run: systemctl --version
	I0801 23:15:57.419698  164558 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 23:15:57.419709  164558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801231451-9849
	I0801 23:15:57.419764  164558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220801231451-9849
	I0801 23:15:57.458895  164558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801231451-9849/id_rsa Username:docker}
	I0801 23:15:57.458922  164558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/kubernetes-upgrade-20220801231451-9849/id_rsa Username:docker}
	I0801 23:15:57.538332  164558 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0801 23:15:57.566632  164558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 23:15:57.577860  164558 docker.go:188] disabling docker service ...
	I0801 23:15:57.577915  164558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0801 23:15:57.587600  164558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0801 23:15:57.596334  164558 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0801 23:15:57.667350  164558 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0801 23:15:57.746010  164558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0801 23:15:57.754661  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 23:15:57.790685  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0801 23:15:57.895483  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0801 23:15:57.995820  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0801 23:15:58.059990  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0801 23:15:58.164062  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0801 23:15:58.263546  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
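The base64 payload minikube pipes into `02-containerd.conf` above is opaque in the log; decoding it (a side illustration, not part of the test run) shows it only pins the containerd config schema version:

```shell
# Decode the exact payload from the tee command above.
decoded="$(printf %s 'dmVyc2lvbiA9IDIK' | base64 -d)"
echo "$decoded"
```

The dropped-in file therefore contains the single line `version = 2`, which activates containerd's v2 config format for the imports added by the preceding sed.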
	I0801 23:15:58.363410  164558 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0801 23:15:58.370835  164558 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0801 23:15:58.377114  164558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 23:15:58.456551  164558 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0801 23:15:58.530673  164558 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0801 23:15:58.530746  164558 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0801 23:15:58.534269  164558 start.go:471] Will wait 60s for crictl version
	I0801 23:15:58.534324  164558 ssh_runner.go:195] Run: sudo crictl version
	I0801 23:15:58.558483  164558 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-08-01T23:15:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
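The retry above is expected right after `systemctl restart containerd`: the CRI server answers "not initialized yet" until it finishes starting, so minikube re-probes `crictl version` under a deadline. A minimal shell sketch of that wait loop (`check_runtime` is a hypothetical stand-in for `sudo crictl version`; the real code lives in retry.go with backoff):

```shell
# Poll check_runtime until it succeeds or the deadline (default 60s) passes.
wait_for_runtime() {
  deadline=$(( $(date +%s) + ${1:-60} ))
  until check_runtime; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 2
  done
}
```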
	I0801 23:16:09.607985  164558 ssh_runner.go:195] Run: sudo crictl version
	I0801 23:16:09.634642  164558 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0801 23:16:09.634711  164558 ssh_runner.go:195] Run: containerd --version
	I0801 23:16:09.668828  164558 ssh_runner.go:195] Run: containerd --version
	I0801 23:16:09.700186  164558 out.go:177] * Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
	I0801 23:16:09.701503  164558 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220801231451-9849 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0801 23:16:09.739769  164558 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0801 23:16:09.743294  164558 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
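The one-liner above is minikube's idempotent `/etc/hosts` rewrite: drop any stale `host.minikube.internal` entry, append the current mapping, and swap the file in via `sudo cp`. A hedged sketch against a scratch copy rather than the real `/etc/hosts`:

```shell
# hosts.example stands in for /etc/hosts; the file name is illustrative.
hosts=./hosts.example
printf '127.0.0.1\tlocalhost\n' > "$hosts"
# Remove any old entry, then append exactly one fresh mapping.
{ grep -v 'host.minikube.internal' "$hosts"; printf '192.168.67.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```

Running the rewrite again leaves exactly one `host.minikube.internal` line, which is why the preceding `grep` check can safely skip it when the entry already matches.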
	I0801 23:16:09.755402  164558 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0801 23:16:09.757592  164558 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0801 23:16:09.757664  164558 ssh_runner.go:195] Run: sudo crictl images --output json
	I0801 23:16:09.783525  164558 containerd.go:543] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.3". assuming images are not preloaded.
	I0801 23:16:09.783584  164558 ssh_runner.go:195] Run: which lz4
	I0801 23:16:09.786723  164558 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0801 23:16:09.789675  164558 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0801 23:16:09.789704  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (447643024 bytes)
	I0801 23:16:10.690904  164558 containerd.go:490] Took 0.904208 seconds to copy over tarball
	I0801 23:16:10.690983  164558 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0801 23:16:13.037390  164558 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.346376661s)
	I0801 23:16:13.037433  164558 containerd.go:497] Took 2.346497 seconds to extract the tarball
	I0801 23:16:13.037444  164558 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0801 23:16:13.091994  164558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 23:16:13.177516  164558 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0801 23:16:13.260028  164558 ssh_runner.go:195] Run: sudo crictl images --output json
	I0801 23:16:13.287812  164558 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.3 k8s.gcr.io/kube-controller-manager:v1.24.3 k8s.gcr.io/kube-scheduler:v1.24.3 k8s.gcr.io/kube-proxy:v1.24.3 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0801 23:16:13.287899  164558 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 23:16:13.287916  164558 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
	I0801 23:16:13.287937  164558 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.3
	I0801 23:16:13.287957  164558 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.3
	I0801 23:16:13.287965  164558 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0801 23:16:13.288017  164558 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
	I0801 23:16:13.287900  164558 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.3
	I0801 23:16:13.287920  164558 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.3
	I0801 23:16:13.289039  164558 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
	I0801 23:16:13.289042  164558 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.3: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.3
	I0801 23:16:13.289042  164558 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.3: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.3
	I0801 23:16:13.289041  164558 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.3: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.3
	I0801 23:16:13.289102  164558 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 23:16:13.289113  164558 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.3: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.3
	I0801 23:16:13.289041  164558 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
	I0801 23:16:13.289390  164558 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0801 23:16:13.698595  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
	I0801 23:16:13.724459  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.3"
	I0801 23:16:13.735649  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.3"
	I0801 23:16:13.738210  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.3"
	I0801 23:16:13.739792  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
	I0801 23:16:13.744902  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.3"
	I0801 23:16:13.776882  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
	I0801 23:16:14.166081  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0801 23:16:14.436496  164558 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0801 23:16:14.436608  164558 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
	I0801 23:16:14.436676  164558 ssh_runner.go:195] Run: which crictl
	I0801 23:16:14.554004  164558 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.3" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.3" does not exist at hash "d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db" in container runtime
	I0801 23:16:14.554059  164558 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.3
	I0801 23:16:14.554100  164558 ssh_runner.go:195] Run: which crictl
	I0801 23:16:14.629818  164558 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.3" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.3" does not exist at hash "586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f" in container runtime
	I0801 23:16:14.629867  164558 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.3
	I0801 23:16:14.629909  164558 ssh_runner.go:195] Run: which crictl
	I0801 23:16:14.648570  164558 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.3" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.3" does not exist at hash "2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302" in container runtime
	I0801 23:16:14.648680  164558 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.3
	I0801 23:16:14.648699  164558 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0801 23:16:14.648732  164558 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
	I0801 23:16:14.648746  164558 ssh_runner.go:195] Run: which crictl
	I0801 23:16:14.648796  164558 ssh_runner.go:195] Run: which crictl
	I0801 23:16:14.657393  164558 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.3" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.3" does not exist at hash "3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0" in container runtime
	I0801 23:16:14.657486  164558 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.3
	I0801 23:16:14.657543  164558 ssh_runner.go:195] Run: which crictl
	I0801 23:16:14.666435  164558 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0801 23:16:14.666480  164558 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
	I0801 23:16:14.666517  164558 ssh_runner.go:195] Run: which crictl
	I0801 23:16:14.785138  164558 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0801 23:16:14.785184  164558 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 23:16:14.785225  164558 ssh_runner.go:195] Run: which crictl
	I0801 23:16:14.785229  164558 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
	I0801 23:16:14.785257  164558 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.3
	I0801 23:16:14.785313  164558 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.3
	I0801 23:16:14.785371  164558 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.3
	I0801 23:16:14.785420  164558 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
	I0801 23:16:14.785471  164558 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.3
	I0801 23:16:14.785521  164558 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
	I0801 23:16:17.309092  164558 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.3: (2.523803187s)
	I0801 23:16:17.309117  164558 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.3: (2.5237837s)
	I0801 23:16:17.309119  164558 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3
	I0801 23:16:17.309125  164558 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3
	I0801 23:16:17.309198  164558 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.3
	I0801 23:16:17.309198  164558 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.3
	I0801 23:16:17.310779  164558 ssh_runner.go:235] Completed: which crictl: (2.525529828s)
	I0801 23:16:17.310839  164558 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 23:16:17.310840  164558 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6: (2.525584495s)
	I0801 23:16:17.310858  164558 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
	I0801 23:16:17.310916  164558 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.3: (2.525524793s)
	I0801 23:16:17.310925  164558 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3
	I0801 23:16:17.310935  164558 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0801 23:16:17.310996  164558 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.3
	I0801 23:16:17.311014  164558 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0: (2.525565992s)
	I0801 23:16:17.311024  164558 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
	I0801 23:16:17.311049  164558 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.3: (2.525563433s)
	I0801 23:16:17.311055  164558 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3
	I0801 23:16:17.311070  164558 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0801 23:16:17.311121  164558 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.3
	I0801 23:16:17.351643  164558 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0801 23:16:17.351748  164558 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0801 23:16:17.351777  164558 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.24.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.24.3': No such file or directory
	I0801 23:16:17.351746  164558 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.24.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.24.3': No such file or directory
	I0801 23:16:17.351819  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3 --> /var/lib/minikube/images/kube-controller-manager_v1.24.3 (31038464 bytes)
	I0801 23:16:17.351823  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3 --> /var/lib/minikube/images/kube-apiserver_v1.24.3 (33799168 bytes)
	I0801 23:16:17.351911  164558 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.24.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.24.3': No such file or directory
	I0801 23:16:17.351870  164558 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7: (2.566327475s)
	I0801 23:16:17.351940  164558 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
	I0801 23:16:17.351959  164558 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0801 23:16:17.351970  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0801 23:16:17.352016  164558 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0801 23:16:17.352041  164558 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0801 23:16:17.351927  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3 --> /var/lib/minikube/images/kube-proxy_v1.24.3 (39518208 bytes)
	I0801 23:16:17.352054  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0801 23:16:17.352104  164558 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.24.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.24.3': No such file or directory
	I0801 23:16:17.352115  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3 --> /var/lib/minikube/images/kube-scheduler_v1.24.3 (15491584 bytes)
	I0801 23:16:17.362762  164558 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0801 23:16:17.362798  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0801 23:16:17.362864  164558 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0801 23:16:17.362903  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
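The interleaved `existence check ... Process exited with status 1` entries above are not failures: each image is `stat`-ed on the node first, and the tarball is scp'd only when the stat fails. A sketch of that copy-if-missing pattern (`ensure_file` and the local `cp` are illustrative; minikube transfers over SSH via ssh_runner):

```shell
# Copy src to dst only when dst does not already exist on the target.
ensure_file() {
  if ! stat -c '%s %y' "$2" >/dev/null 2>&1; then
    cp "$1" "$2"
  fi
}
```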
	I0801 23:16:17.461670  164558 containerd.go:227] Loading image: /var/lib/minikube/images/pause_3.7
	I0801 23:16:17.461823  164558 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I0801 23:16:17.704642  164558 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
	I0801 23:16:17.704683  164558 containerd.go:227] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0801 23:16:17.704734  164558 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0801 23:16:18.555538  164558 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
	I0801 23:16:18.555580  164558 containerd.go:227] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0801 23:16:18.555622  164558 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0801 23:16:18.984301  164558 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0801 23:16:18.984343  164558 containerd.go:227] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.3
	I0801 23:16:18.984380  164558 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.3
	I0801 23:16:19.795026  164558 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.3 from cache
	I0801 23:16:19.795071  164558 containerd.go:227] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.3
	I0801 23:16:19.795125  164558 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.3
	I0801 23:16:21.926531  164558 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.3: (2.13138572s)
	I0801 23:16:21.926558  164558 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.3 from cache
	I0801 23:16:21.926598  164558 containerd.go:227] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.3
	I0801 23:16:21.926642  164558 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.3
	I0801 23:16:24.480013  164558 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.3: (2.553342357s)
	I0801 23:16:24.480042  164558 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.3 from cache
	I0801 23:16:24.480074  164558 containerd.go:227] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.3
	I0801 23:16:24.480116  164558 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.3
	I0801 23:16:25.404975  164558 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.3 from cache
	I0801 23:16:25.405032  164558 containerd.go:227] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0801 23:16:25.405080  164558 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I0801 23:16:29.111314  164558 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (3.706205128s)
	I0801 23:16:29.111346  164558 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
	I0801 23:16:29.111376  164558 cache_images.go:123] Successfully loaded all cached images
	I0801 23:16:29.111382  164558 cache_images.go:92] LoadImages completed in 15.82354347s
	I0801 23:16:29.111432  164558 ssh_runner.go:195] Run: sudo crictl info
	I0801 23:16:29.148342  164558 cni.go:95] Creating CNI manager for ""
	I0801 23:16:29.148375  164558 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0801 23:16:29.148390  164558 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 23:16:29.148411  164558 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220801231451-9849 NodeName:kubernetes-upgrade-20220801231451-9849 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 23:16:29.148587  164558 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-20220801231451-9849"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0801 23:16:29.148712  164558 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20220801231451-9849 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220801231451-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 23:16:29.148786  164558 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 23:16:29.157561  164558 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 23:16:29.157631  164558 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 23:16:29.165836  164558 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0801 23:16:29.179517  164558 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 23:16:29.192673  164558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0801 23:16:29.205355  164558 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0801 23:16:29.208341  164558 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 23:16:29.217062  164558 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801231451-9849 for IP: 192.168.67.2
	I0801 23:16:29.217154  164558 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 23:16:29.217198  164558 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 23:16:29.217259  164558 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801231451-9849/client.key
	I0801 23:16:29.217324  164558 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801231451-9849/apiserver.key.c7fa3a9e
	I0801 23:16:29.217357  164558 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801231451-9849/proxy-client.key
	I0801 23:16:29.217447  164558 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/9849.pem (1338 bytes)
	W0801 23:16:29.217477  164558 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/9849_empty.pem, impossibly tiny 0 bytes
	I0801 23:16:29.217491  164558 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 23:16:29.217515  164558 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 23:16:29.217541  164558 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 23:16:29.217564  164558 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1675 bytes)
	I0801 23:16:29.217611  164558 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/98492.pem (1708 bytes)
	I0801 23:16:29.218123  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801231451-9849/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 23:16:29.235503  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801231451-9849/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0801 23:16:29.254830  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801231451-9849/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 23:16:29.275137  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801231451-9849/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0801 23:16:29.291857  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 23:16:29.308185  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0801 23:16:29.324302  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 23:16:29.344944  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0801 23:16:29.366080  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 23:16:29.386941  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/9849.pem --> /usr/share/ca-certificates/9849.pem (1338 bytes)
	I0801 23:16:29.403164  164558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/98492.pem --> /usr/share/ca-certificates/98492.pem (1708 bytes)
	I0801 23:16:29.420176  164558 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 23:16:29.432416  164558 ssh_runner.go:195] Run: openssl version
	I0801 23:16:29.437674  164558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 23:16:29.445985  164558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 23:16:29.450362  164558 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0801 23:16:29.450423  164558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 23:16:29.455608  164558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 23:16:29.462766  164558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9849.pem && ln -fs /usr/share/ca-certificates/9849.pem /etc/ssl/certs/9849.pem"
	I0801 23:16:29.471298  164558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9849.pem
	I0801 23:16:29.474711  164558 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 22:50 /usr/share/ca-certificates/9849.pem
	I0801 23:16:29.474789  164558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9849.pem
	I0801 23:16:29.479929  164558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9849.pem /etc/ssl/certs/51391683.0"
	I0801 23:16:29.487775  164558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98492.pem && ln -fs /usr/share/ca-certificates/98492.pem /etc/ssl/certs/98492.pem"
	I0801 23:16:29.497117  164558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98492.pem
	I0801 23:16:29.501170  164558 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 22:50 /usr/share/ca-certificates/98492.pem
	I0801 23:16:29.501221  164558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98492.pem
	I0801 23:16:29.506535  164558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98492.pem /etc/ssl/certs/3ec20f2e.0"
	I0801 23:16:29.513448  164558 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220801231451-9849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:kubernetes-upgrade-20220801231451-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 23:16:29.513538  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0801 23:16:29.513591  164558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0801 23:16:29.537883  164558 cri.go:87] found id: ""
	I0801 23:16:29.537948  164558 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 23:16:29.546929  164558 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 23:16:29.546950  164558 kubeadm.go:626] restartCluster start
	I0801 23:16:29.546998  164558 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 23:16:29.555447  164558 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:16:29.556477  164558 kubeconfig.go:116] verify returned: extract IP: "kubernetes-upgrade-20220801231451-9849" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 23:16:29.557019  164558 kubeconfig.go:127] "kubernetes-upgrade-20220801231451-9849" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 23:16:29.557817  164558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mk908131de2da31ada6455cebc27e25fe21e4ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:16:29.559046  164558 kapi.go:59] client config for kubernetes-upgrade-20220801231451-9849: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801231451-9849/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kubernetes-upgrade-20220801231451-9849/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1740be0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0801 23:16:29.559615  164558 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 23:16:29.567806  164558 kubeadm.go:593] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-08-01 23:15:15.962767616 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-08-01 23:16:29.200501543 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.67.2
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.67.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20220801231451-9849
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.24.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0801 23:16:29.567824  164558 kubeadm.go:1092] stopping kube-system containers ...
	I0801 23:16:29.567836  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0801 23:16:29.567880  164558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0801 23:16:29.595144  164558 cri.go:87] found id: ""
	I0801 23:16:29.595206  164558 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 23:16:29.605757  164558 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 23:16:29.613130  164558 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5755 Aug  1 23:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5791 Aug  1 23:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5955 Aug  1 23:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5743 Aug  1 23:15 /etc/kubernetes/scheduler.conf
	
	I0801 23:16:29.613195  164558 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0801 23:16:29.621275  164558 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0801 23:16:29.629846  164558 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0801 23:16:29.636841  164558 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0801 23:16:29.645112  164558 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 23:16:29.652305  164558 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 23:16:29.652328  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 23:16:29.702218  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 23:16:30.278087  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 23:16:30.498127  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 23:16:30.562698  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0801 23:16:30.607957  164558 api_server.go:51] waiting for apiserver process to appear ...
	I0801 23:16:30.608031  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:31.117412  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:31.616887  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:32.117124  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:32.617702  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:33.117196  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:33.616900  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:34.116701  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:34.617701  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:35.117315  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:35.617263  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:36.117233  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:36.617298  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:37.117335  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:37.617462  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:38.117107  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:38.617236  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:39.117533  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:39.616942  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:40.117085  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:40.617079  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:41.117435  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:41.616713  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:42.117339  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:42.619438  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:43.117579  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:43.617365  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:44.117066  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:44.617534  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:45.117295  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:45.616695  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:46.117375  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:46.617658  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:47.117030  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:47.616981  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:48.116672  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:48.616727  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:49.116947  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:49.617009  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:50.116945  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:50.616909  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:51.116973  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:51.617001  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:52.117538  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:52.617397  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:53.117421  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:53.617488  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:54.117394  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:54.616668  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:55.117675  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:55.616801  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:56.117419  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:56.616897  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:57.117293  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:57.617641  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:58.117443  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:58.616761  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:59.116744  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:16:59.616817  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:00.117131  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:00.617488  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:01.117184  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:01.617501  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:02.117613  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:02.616953  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:03.117124  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:03.617411  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:04.116718  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:04.617039  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:05.116889  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:05.617005  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:06.117436  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:06.617247  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:07.116745  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:07.617015  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:08.117363  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:08.617397  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:09.117367  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:09.617739  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:10.116728  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:10.617302  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:11.116742  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:11.617075  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:12.117430  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:12.617517  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:13.117066  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:13.617486  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:14.117069  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:14.617467  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:15.116762  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:15.617082  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:16.117509  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:16.616851  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:17.117562  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:17.616733  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:18.116853  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:18.617110  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:19.116730  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:19.617037  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:20.116855  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:20.616793  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:21.117582  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:21.617393  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:22.117702  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:22.616969  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:23.117090  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:23.617362  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:24.116683  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:24.617504  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:25.117503  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:25.616733  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:26.116689  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:26.617557  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:27.117366  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:27.617417  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:28.116936  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:28.617265  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:29.117359  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:29.617035  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:30.117465  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
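The run of identical `pgrep -xnf kube-apiserver.*minikube.*` lines above is minikube polling, at roughly 500 ms intervals, for a running apiserver process before it gives up and falls back to container-level diagnostics. A minimal shell sketch of that wait loop — `wait_for_process` is a hypothetical helper name (minikube drives this loop from Go via `ssh_runner`), and the one-second timeout is shortened for illustration:

```shell
# Poll for a process whose full command line matches PATTERN, giving up
# after TIMEOUT seconds. Mirrors the ~500 ms probe cadence in the log above.
wait_for_process() {
  pattern=$1; timeout=$2; start=$(date +%s)
  while ! pgrep -xnf "$pattern" >/dev/null 2>&1; do
    if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
      echo "timeout waiting for: $pattern"
      return 1
    fi
    sleep 0.5
  done
  echo "found: $pattern"
}

# With no matching process (as in this test run), the loop times out:
wait_for_process 'no-such-apiserver.*' 1
```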
	I0801 23:17:30.617708  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:17:30.617796  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:17:30.647647  164558 cri.go:87] found id: ""
	I0801 23:17:30.647688  164558 logs.go:274] 0 containers: []
	W0801 23:17:30.647698  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:17:30.647707  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:17:30.647763  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:17:30.685015  164558 cri.go:87] found id: ""
	I0801 23:17:30.685042  164558 logs.go:274] 0 containers: []
	W0801 23:17:30.685052  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:17:30.685059  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:17:30.685116  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:17:30.713197  164558 cri.go:87] found id: ""
	I0801 23:17:30.713228  164558 logs.go:274] 0 containers: []
	W0801 23:17:30.713236  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:17:30.713244  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:17:30.713294  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:17:30.738247  164558 cri.go:87] found id: ""
	I0801 23:17:30.738275  164558 logs.go:274] 0 containers: []
	W0801 23:17:30.738283  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:17:30.738291  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:17:30.738383  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:17:30.768779  164558 cri.go:87] found id: ""
	I0801 23:17:30.768808  164558 logs.go:274] 0 containers: []
	W0801 23:17:30.768817  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:17:30.768824  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:17:30.768899  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:17:30.800295  164558 cri.go:87] found id: ""
	I0801 23:17:30.800320  164558 logs.go:274] 0 containers: []
	W0801 23:17:30.800329  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:17:30.800346  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:17:30.800399  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:17:30.827166  164558 cri.go:87] found id: ""
	I0801 23:17:30.827194  164558 logs.go:274] 0 containers: []
	W0801 23:17:30.827202  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:17:30.827210  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:17:30.827267  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:17:30.851680  164558 cri.go:87] found id: ""
	I0801 23:17:30.851706  164558 logs.go:274] 0 containers: []
	W0801 23:17:30.851713  164558 logs.go:276] No container was found matching "kube-controller-manager"
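The scan above walks a fixed list of eight control-plane component names, issuing one `sudo crictl ps -a --quiet --name=<component>` query per entry; every query comes back empty because no containers are running at all. A sketch of the same per-component sweep — printed rather than executed, since `crictl` needs root and a live CRI runtime socket:

```shell
# Rebuild the list of per-component queries issued by the scan above.
# (Printed rather than executed: crictl needs root and a live CRI socket.)
cmds=$(for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                   kubernetes-dashboard storage-provisioner kube-controller-manager; do
         echo "sudo crictl ps -a --quiet --name=$name"
       done)
printf '%s\n' "$cmds"
```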
	I0801 23:17:30.851724  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:17:30.851748  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:17:30.869063  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:17:30.869096  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:17:30.937776  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:17:30.937806  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:17:30.937827  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:17:30.988106  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:17:30.988149  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
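The container-status command above uses a `which crictl || echo crictl` fallback: when `crictl` is on PATH its resolved path is substituted into the command, otherwise the bare name is passed through unchanged (and the trailing `|| sudo docker ps -a` covers Docker-only hosts). The idiom in isolation, using a deliberately nonexistent tool name to exercise the fallback branch:

```shell
# Resolve a binary to its PATH location if installed, else fall back to
# the bare name. 'no-such-tool-xyz' is deliberately not a real binary.
tool=$(which no-such-tool-xyz 2>/dev/null || echo no-such-tool-xyz)
echo "$tool"
```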
	I0801 23:17:31.027146  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:17:31.027185  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:17:31.100714  164558 logs.go:138] Found kubelet problem: Aug 01 23:17:30 kubernetes-upgrade-20220801231451-9849 kubelet[2356]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:17:31.160955  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:17:31.160995  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:17:31.161183  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:17:31.161205  164558 out.go:239]   Aug 01 23:17:30 kubernetes-upgrade-20220801231451-9849 kubelet[2356]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:17:30 kubernetes-upgrade-20220801231451-9849 kubelet[2356]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:17:31.161212  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:17:31.161221  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:17:41.162030  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:41.616729  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:17:41.616805  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:17:41.639709  164558 cri.go:87] found id: ""
	I0801 23:17:41.639732  164558 logs.go:274] 0 containers: []
	W0801 23:17:41.639738  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:17:41.639745  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:17:41.639788  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:17:41.663832  164558 cri.go:87] found id: ""
	I0801 23:17:41.663860  164558 logs.go:274] 0 containers: []
	W0801 23:17:41.663868  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:17:41.663876  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:17:41.663936  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:17:41.687099  164558 cri.go:87] found id: ""
	I0801 23:17:41.687122  164558 logs.go:274] 0 containers: []
	W0801 23:17:41.687131  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:17:41.687139  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:17:41.687190  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:17:41.711887  164558 cri.go:87] found id: ""
	I0801 23:17:41.711919  164558 logs.go:274] 0 containers: []
	W0801 23:17:41.711929  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:17:41.711937  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:17:41.711991  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:17:41.736732  164558 cri.go:87] found id: ""
	I0801 23:17:41.736758  164558 logs.go:274] 0 containers: []
	W0801 23:17:41.736766  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:17:41.736773  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:17:41.736830  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:17:41.762052  164558 cri.go:87] found id: ""
	I0801 23:17:41.762091  164558 logs.go:274] 0 containers: []
	W0801 23:17:41.762101  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:17:41.762109  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:17:41.762162  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:17:41.786854  164558 cri.go:87] found id: ""
	I0801 23:17:41.786878  164558 logs.go:274] 0 containers: []
	W0801 23:17:41.786887  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:17:41.786894  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:17:41.786953  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:17:41.811338  164558 cri.go:87] found id: ""
	I0801 23:17:41.811362  164558 logs.go:274] 0 containers: []
	W0801 23:17:41.811368  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:17:41.811376  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:17:41.811386  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:17:41.867053  164558 logs.go:138] Found kubelet problem: Aug 01 23:17:41 kubernetes-upgrade-20220801231451-9849 kubelet[2703]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:17:41.912126  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:17:41.912161  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:17:41.927541  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:17:41.927581  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:17:41.979082  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:17:41.979112  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:17:41.979132  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:17:42.013098  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:17:42.013127  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:17:42.039591  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:17:42.039619  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:17:42.039764  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:17:42.039786  164558 out.go:239]   Aug 01 23:17:41 kubernetes-upgrade-20220801231451-9849 kubelet[2703]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:17:41 kubernetes-upgrade-20220801231451-9849 kubelet[2703]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:17:42.039795  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:17:42.039801  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:17:52.040932  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:17:52.117507  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:17:52.117576  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:17:52.142110  164558 cri.go:87] found id: ""
	I0801 23:17:52.142137  164558 logs.go:274] 0 containers: []
	W0801 23:17:52.142143  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:17:52.142155  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:17:52.142216  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:17:52.167730  164558 cri.go:87] found id: ""
	I0801 23:17:52.167757  164558 logs.go:274] 0 containers: []
	W0801 23:17:52.167765  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:17:52.167772  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:17:52.167822  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:17:52.196028  164558 cri.go:87] found id: ""
	I0801 23:17:52.196057  164558 logs.go:274] 0 containers: []
	W0801 23:17:52.196065  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:17:52.196072  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:17:52.196126  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:17:52.225652  164558 cri.go:87] found id: ""
	I0801 23:17:52.225678  164558 logs.go:274] 0 containers: []
	W0801 23:17:52.225684  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:17:52.225691  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:17:52.225745  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:17:52.254252  164558 cri.go:87] found id: ""
	I0801 23:17:52.254279  164558 logs.go:274] 0 containers: []
	W0801 23:17:52.254292  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:17:52.254302  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:17:52.254446  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:17:52.283452  164558 cri.go:87] found id: ""
	I0801 23:17:52.283483  164558 logs.go:274] 0 containers: []
	W0801 23:17:52.283491  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:17:52.283500  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:17:52.283560  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:17:52.312407  164558 cri.go:87] found id: ""
	I0801 23:17:52.312438  164558 logs.go:274] 0 containers: []
	W0801 23:17:52.312448  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:17:52.312459  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:17:52.312515  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:17:52.344774  164558 cri.go:87] found id: ""
	I0801 23:17:52.344798  164558 logs.go:274] 0 containers: []
	W0801 23:17:52.344807  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:17:52.344818  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:17:52.344833  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:17:52.402385  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:17:52.402427  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:17:52.402453  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:17:52.454992  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:17:52.455033  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:17:52.484955  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:17:52.484986  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:17:52.534005  164558 logs.go:138] Found kubelet problem: Aug 01 23:17:52 kubernetes-upgrade-20220801231451-9849 kubelet[2986]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:17:52.604501  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:17:52.604535  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:17:52.618840  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:17:52.618863  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:17:52.618961  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:17:52.618974  164558 out.go:239]   Aug 01 23:17:52 kubernetes-upgrade-20220801231451-9849 kubelet[2986]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:17:52 kubernetes-upgrade-20220801231451-9849 kubelet[2986]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:17:52.618978  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:17:52.618984  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:18:02.619910  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:18:03.117196  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:18:03.117259  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:18:03.143868  164558 cri.go:87] found id: ""
	I0801 23:18:03.143895  164558 logs.go:274] 0 containers: []
	W0801 23:18:03.143904  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:18:03.143912  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:18:03.143972  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:18:03.175412  164558 cri.go:87] found id: ""
	I0801 23:18:03.175439  164558 logs.go:274] 0 containers: []
	W0801 23:18:03.175448  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:18:03.175455  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:18:03.175498  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:18:03.198121  164558 cri.go:87] found id: ""
	I0801 23:18:03.198143  164558 logs.go:274] 0 containers: []
	W0801 23:18:03.198149  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:18:03.198155  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:18:03.198194  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:18:03.222480  164558 cri.go:87] found id: ""
	I0801 23:18:03.222507  164558 logs.go:274] 0 containers: []
	W0801 23:18:03.222513  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:18:03.222519  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:18:03.222565  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:18:03.252344  164558 cri.go:87] found id: ""
	I0801 23:18:03.252371  164558 logs.go:274] 0 containers: []
	W0801 23:18:03.252379  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:18:03.252386  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:18:03.252444  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:18:03.280098  164558 cri.go:87] found id: ""
	I0801 23:18:03.280127  164558 logs.go:274] 0 containers: []
	W0801 23:18:03.280136  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:18:03.280147  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:18:03.280202  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:18:03.304224  164558 cri.go:87] found id: ""
	I0801 23:18:03.304248  164558 logs.go:274] 0 containers: []
	W0801 23:18:03.304255  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:18:03.304261  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:18:03.304306  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:18:03.327566  164558 cri.go:87] found id: ""
	I0801 23:18:03.327590  164558 logs.go:274] 0 containers: []
	W0801 23:18:03.327597  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:18:03.327605  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:18:03.327620  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:18:03.344434  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:18:03.344473  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:18:03.402962  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:18:03.402982  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:18:03.402991  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:18:03.446997  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:18:03.447039  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:18:03.489795  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:18:03.489838  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:18:03.566253  164558 logs.go:138] Found kubelet problem: Aug 01 23:18:02 kubernetes-upgrade-20220801231451-9849 kubelet[3229]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:18:03.627861  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:18:03.627891  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:18:03.627993  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:18:03.628006  164558 out.go:239]   Aug 01 23:18:02 kubernetes-upgrade-20220801231451-9849 kubelet[3229]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:18:02 kubernetes-upgrade-20220801231451-9849 kubelet[3229]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:18:03.628010  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:18:03.628015  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
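The cycle above — query the CRI runtime for each expected control-plane component, report "No container was found matching …" for all of them — repeats verbatim every ~10 seconds for the rest of the run. Its shape can be sketched as follows (the crictl call is stubbed to return nothing, matching the log; in the real run it is `sudo crictl ps -a --quiet --name=<component>`):

```shell
# Sketch of the control-plane polling loop seen in the log: for each
# expected component, ask the CRI runtime for matching container IDs.
# The crictl query is stubbed out here so the sketch is self-contained.
list_missing_components() {
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kubernetes-dashboard storage-provisioner kube-controller-manager; do
    # Real environment: ids=$(sudo crictl ps -a --quiet --name="$name")
    ids=""   # empty, as in the log: 0 containers found for every component
    if [ -z "$ids" ]; then
      printf 'No container was found matching "%s"\n' "$name"
    fi
  done
}

list_missing_components
```

All eight components come back empty because the kubelet — which would create those pods — never starts, as the "unknown flag: --cni-conf-dir" errors below show.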
	I0801 23:18:13.630126  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:18:14.117424  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:18:14.117504  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:18:14.141439  164558 cri.go:87] found id: ""
	I0801 23:18:14.141475  164558 logs.go:274] 0 containers: []
	W0801 23:18:14.141483  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:18:14.141490  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:18:14.141552  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:18:14.164560  164558 cri.go:87] found id: ""
	I0801 23:18:14.164581  164558 logs.go:274] 0 containers: []
	W0801 23:18:14.164587  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:18:14.164592  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:18:14.164633  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:18:14.186834  164558 cri.go:87] found id: ""
	I0801 23:18:14.186861  164558 logs.go:274] 0 containers: []
	W0801 23:18:14.186867  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:18:14.186874  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:18:14.186920  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:18:14.208574  164558 cri.go:87] found id: ""
	I0801 23:18:14.208607  164558 logs.go:274] 0 containers: []
	W0801 23:18:14.208615  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:18:14.208622  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:18:14.208694  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:18:14.231581  164558 cri.go:87] found id: ""
	I0801 23:18:14.231608  164558 logs.go:274] 0 containers: []
	W0801 23:18:14.231631  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:18:14.231641  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:18:14.231696  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:18:14.253902  164558 cri.go:87] found id: ""
	I0801 23:18:14.253926  164558 logs.go:274] 0 containers: []
	W0801 23:18:14.253934  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:18:14.253947  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:18:14.253998  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:18:14.276343  164558 cri.go:87] found id: ""
	I0801 23:18:14.276366  164558 logs.go:274] 0 containers: []
	W0801 23:18:14.276375  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:18:14.276382  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:18:14.276433  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:18:14.300188  164558 cri.go:87] found id: ""
	I0801 23:18:14.300208  164558 logs.go:274] 0 containers: []
	W0801 23:18:14.300214  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:18:14.300223  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:18:14.300236  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:18:14.355988  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:18:14.356010  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:18:14.356021  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:18:14.395156  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:18:14.395195  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:18:14.420662  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:18:14.420692  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:18:14.465966  164558 logs.go:138] Found kubelet problem: Aug 01 23:18:14 kubernetes-upgrade-20220801231451-9849 kubelet[3533]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:18:14.511361  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:18:14.511393  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:18:14.526201  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:18:14.526228  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:18:14.526422  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:18:14.526441  164558 out.go:239]   Aug 01 23:18:14 kubernetes-upgrade-20220801231451-9849 kubelet[3533]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:18:14 kubernetes-upgrade-20220801231451-9849 kubelet[3533]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:18:14.526447  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:18:14.526457  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:18:24.527759  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:18:24.616993  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:18:24.617055  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:18:24.652674  164558 cri.go:87] found id: ""
	I0801 23:18:24.652698  164558 logs.go:274] 0 containers: []
	W0801 23:18:24.652706  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:18:24.652714  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:18:24.652761  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:18:24.681644  164558 cri.go:87] found id: ""
	I0801 23:18:24.681675  164558 logs.go:274] 0 containers: []
	W0801 23:18:24.681683  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:18:24.681691  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:18:24.681755  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:18:24.704822  164558 cri.go:87] found id: ""
	I0801 23:18:24.704853  164558 logs.go:274] 0 containers: []
	W0801 23:18:24.704862  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:18:24.704871  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:18:24.704942  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:18:24.728328  164558 cri.go:87] found id: ""
	I0801 23:18:24.728351  164558 logs.go:274] 0 containers: []
	W0801 23:18:24.728357  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:18:24.728363  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:18:24.728412  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:18:24.764595  164558 cri.go:87] found id: ""
	I0801 23:18:24.764623  164558 logs.go:274] 0 containers: []
	W0801 23:18:24.764633  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:18:24.764644  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:18:24.764698  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:18:24.791318  164558 cri.go:87] found id: ""
	I0801 23:18:24.791347  164558 logs.go:274] 0 containers: []
	W0801 23:18:24.791356  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:18:24.791364  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:18:24.791416  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:18:24.815569  164558 cri.go:87] found id: ""
	I0801 23:18:24.815595  164558 logs.go:274] 0 containers: []
	W0801 23:18:24.815603  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:18:24.815611  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:18:24.815673  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:18:24.852570  164558 cri.go:87] found id: ""
	I0801 23:18:24.852604  164558 logs.go:274] 0 containers: []
	W0801 23:18:24.852612  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:18:24.852624  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:18:24.852638  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:18:24.912358  164558 logs.go:138] Found kubelet problem: Aug 01 23:18:24 kubernetes-upgrade-20220801231451-9849 kubelet[3827]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:18:24.971719  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:18:24.971756  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:18:24.986510  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:18:24.986537  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:18:25.042173  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:18:25.042215  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:18:25.042232  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:18:25.087467  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:18:25.087503  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:18:25.114473  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:18:25.114499  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:18:25.114626  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:18:25.114640  164558 out.go:239]   Aug 01 23:18:24 kubernetes-upgrade-20220801231451-9849 kubelet[3827]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:18:24 kubernetes-upgrade-20220801231451-9849 kubelet[3827]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:18:25.114650  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:18:25.114658  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:18:35.115447  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:18:35.617074  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:18:35.617155  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:18:35.640512  164558 cri.go:87] found id: ""
	I0801 23:18:35.640539  164558 logs.go:274] 0 containers: []
	W0801 23:18:35.640547  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:18:35.640559  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:18:35.640612  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:18:35.663179  164558 cri.go:87] found id: ""
	I0801 23:18:35.663235  164558 logs.go:274] 0 containers: []
	W0801 23:18:35.663244  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:18:35.663252  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:18:35.663304  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:18:35.686484  164558 cri.go:87] found id: ""
	I0801 23:18:35.686509  164558 logs.go:274] 0 containers: []
	W0801 23:18:35.686517  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:18:35.686524  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:18:35.686578  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:18:35.709526  164558 cri.go:87] found id: ""
	I0801 23:18:35.709558  164558 logs.go:274] 0 containers: []
	W0801 23:18:35.709569  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:18:35.709578  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:18:35.709631  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:18:35.734780  164558 cri.go:87] found id: ""
	I0801 23:18:35.734810  164558 logs.go:274] 0 containers: []
	W0801 23:18:35.734823  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:18:35.734830  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:18:35.734887  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:18:35.759848  164558 cri.go:87] found id: ""
	I0801 23:18:35.759880  164558 logs.go:274] 0 containers: []
	W0801 23:18:35.759893  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:18:35.759901  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:18:35.759956  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:18:35.783928  164558 cri.go:87] found id: ""
	I0801 23:18:35.783949  164558 logs.go:274] 0 containers: []
	W0801 23:18:35.783955  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:18:35.783961  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:18:35.784003  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:18:35.807843  164558 cri.go:87] found id: ""
	I0801 23:18:35.807864  164558 logs.go:274] 0 containers: []
	W0801 23:18:35.807870  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:18:35.807880  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:18:35.807892  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:18:35.856266  164558 logs.go:138] Found kubelet problem: Aug 01 23:18:35 kubernetes-upgrade-20220801231451-9849 kubelet[4178]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:18:35.901955  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:18:35.901989  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:18:35.916430  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:18:35.916455  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:18:35.965773  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:18:35.965796  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:18:35.965806  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:18:36.000469  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:18:36.000500  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:18:36.027308  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:18:36.027332  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:18:36.027448  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:18:36.027462  164558 out.go:239]   Aug 01 23:18:35 kubernetes-upgrade-20220801231451-9849 kubelet[4178]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:18:35 kubernetes-upgrade-20220801231451-9849 kubelet[4178]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:18:36.027466  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:18:36.027471  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:18:46.029443  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:18:46.117334  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:18:46.117412  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:18:46.140952  164558 cri.go:87] found id: ""
	I0801 23:18:46.140975  164558 logs.go:274] 0 containers: []
	W0801 23:18:46.140981  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:18:46.140986  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:18:46.141036  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:18:46.163861  164558 cri.go:87] found id: ""
	I0801 23:18:46.163888  164558 logs.go:274] 0 containers: []
	W0801 23:18:46.163896  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:18:46.163902  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:18:46.163946  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:18:46.185658  164558 cri.go:87] found id: ""
	I0801 23:18:46.185680  164558 logs.go:274] 0 containers: []
	W0801 23:18:46.185686  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:18:46.185691  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:18:46.185731  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:18:46.208161  164558 cri.go:87] found id: ""
	I0801 23:18:46.208192  164558 logs.go:274] 0 containers: []
	W0801 23:18:46.208201  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:18:46.208209  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:18:46.208260  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:18:46.232778  164558 cri.go:87] found id: ""
	I0801 23:18:46.232809  164558 logs.go:274] 0 containers: []
	W0801 23:18:46.232818  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:18:46.232826  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:18:46.232882  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:18:46.259373  164558 cri.go:87] found id: ""
	I0801 23:18:46.259398  164558 logs.go:274] 0 containers: []
	W0801 23:18:46.259410  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:18:46.259418  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:18:46.259475  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:18:46.283484  164558 cri.go:87] found id: ""
	I0801 23:18:46.283504  164558 logs.go:274] 0 containers: []
	W0801 23:18:46.283511  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:18:46.283521  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:18:46.283561  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:18:46.305851  164558 cri.go:87] found id: ""
	I0801 23:18:46.305874  164558 logs.go:274] 0 containers: []
	W0801 23:18:46.305880  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:18:46.305888  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:18:46.305900  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:18:46.350738  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:18:46.350767  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:18:46.375547  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:18:46.375574  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:18:46.424921  164558 logs.go:138] Found kubelet problem: Aug 01 23:18:46 kubernetes-upgrade-20220801231451-9849 kubelet[4474]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:18:46.470214  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:18:46.470243  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:18:46.484796  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:18:46.484823  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:18:46.534783  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:18:46.534811  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:18:46.534828  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:18:46.534937  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:18:46.534948  164558 out.go:239]   Aug 01 23:18:46 kubernetes-upgrade-20220801231451-9849 kubelet[4474]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:18:46 kubernetes-upgrade-20220801231451-9849 kubelet[4474]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:18:46.534953  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:18:46.534958  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:18:56.535289  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:18:56.617359  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:18:56.617427  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:18:56.641261  164558 cri.go:87] found id: ""
	I0801 23:18:56.641282  164558 logs.go:274] 0 containers: []
	W0801 23:18:56.641289  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:18:56.641295  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:18:56.641335  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:18:56.664071  164558 cri.go:87] found id: ""
	I0801 23:18:56.664097  164558 logs.go:274] 0 containers: []
	W0801 23:18:56.664107  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:18:56.664113  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:18:56.664159  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:18:56.686489  164558 cri.go:87] found id: ""
	I0801 23:18:56.686515  164558 logs.go:274] 0 containers: []
	W0801 23:18:56.686523  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:18:56.686531  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:18:56.686586  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:18:56.709339  164558 cri.go:87] found id: ""
	I0801 23:18:56.709366  164558 logs.go:274] 0 containers: []
	W0801 23:18:56.709373  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:18:56.709379  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:18:56.709425  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:18:56.733568  164558 cri.go:87] found id: ""
	I0801 23:18:56.733594  164558 logs.go:274] 0 containers: []
	W0801 23:18:56.733601  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:18:56.733608  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:18:56.733664  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:18:56.758298  164558 cri.go:87] found id: ""
	I0801 23:18:56.758325  164558 logs.go:274] 0 containers: []
	W0801 23:18:56.758333  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:18:56.758369  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:18:56.758426  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:18:56.782564  164558 cri.go:87] found id: ""
	I0801 23:18:56.782595  164558 logs.go:274] 0 containers: []
	W0801 23:18:56.782606  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:18:56.782613  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:18:56.782667  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:18:56.805320  164558 cri.go:87] found id: ""
	I0801 23:18:56.805343  164558 logs.go:274] 0 containers: []
	W0801 23:18:56.805350  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:18:56.805361  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:18:56.805374  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:18:56.858003  164558 logs.go:138] Found kubelet problem: Aug 01 23:18:56 kubernetes-upgrade-20220801231451-9849 kubelet[4772]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:18:56.902963  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:18:56.902996  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:18:56.917700  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:18:56.917728  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:18:56.966828  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:18:56.966850  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:18:56.966862  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:18:57.001098  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:18:57.001124  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:18:57.027278  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:18:57.027298  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:18:57.027424  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:18:57.027444  164558 out.go:239]   Aug 01 23:18:56 kubernetes-upgrade-20220801231451-9849 kubelet[4772]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:18:56 kubernetes-upgrade-20220801231451-9849 kubelet[4772]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:18:57.027451  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:18:57.027457  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:19:07.028183  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:19:07.117233  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:19:07.117294  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:19:07.140772  164558 cri.go:87] found id: ""
	I0801 23:19:07.140800  164558 logs.go:274] 0 containers: []
	W0801 23:19:07.140808  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:19:07.140817  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:19:07.140869  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:19:07.163485  164558 cri.go:87] found id: ""
	I0801 23:19:07.163514  164558 logs.go:274] 0 containers: []
	W0801 23:19:07.163523  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:19:07.163529  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:19:07.163592  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:19:07.186455  164558 cri.go:87] found id: ""
	I0801 23:19:07.186480  164558 logs.go:274] 0 containers: []
	W0801 23:19:07.186486  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:19:07.186491  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:19:07.186544  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:19:07.208664  164558 cri.go:87] found id: ""
	I0801 23:19:07.208688  164558 logs.go:274] 0 containers: []
	W0801 23:19:07.208697  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:19:07.208704  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:19:07.208754  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:19:07.233295  164558 cri.go:87] found id: ""
	I0801 23:19:07.233324  164558 logs.go:274] 0 containers: []
	W0801 23:19:07.233332  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:19:07.233340  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:19:07.233394  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:19:07.258056  164558 cri.go:87] found id: ""
	I0801 23:19:07.258088  164558 logs.go:274] 0 containers: []
	W0801 23:19:07.258097  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:19:07.258106  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:19:07.258166  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:19:07.283864  164558 cri.go:87] found id: ""
	I0801 23:19:07.283893  164558 logs.go:274] 0 containers: []
	W0801 23:19:07.283903  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:19:07.283910  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:19:07.283965  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:19:07.306719  164558 cri.go:87] found id: ""
	I0801 23:19:07.306743  164558 logs.go:274] 0 containers: []
	W0801 23:19:07.306754  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:19:07.306768  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:19:07.306785  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:19:07.366956  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:19:07.366979  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:19:07.366988  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:19:07.403004  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:19:07.403031  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:19:07.428801  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:19:07.428826  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:19:07.478308  164558 logs.go:138] Found kubelet problem: Aug 01 23:19:07 kubernetes-upgrade-20220801231451-9849 kubelet[5068]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:19:07.523635  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:19:07.523665  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:19:07.538212  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:19:07.538236  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:19:07.538366  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:19:07.538381  164558 out.go:239]   Aug 01 23:19:07 kubernetes-upgrade-20220801231451-9849 kubelet[5068]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:19:07 kubernetes-upgrade-20220801231451-9849 kubelet[5068]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:19:07.538387  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:19:07.538395  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:19:17.539800  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:19:17.616969  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:19:17.617041  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:19:17.642239  164558 cri.go:87] found id: ""
	I0801 23:19:17.642268  164558 logs.go:274] 0 containers: []
	W0801 23:19:17.642281  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:19:17.642289  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:19:17.642347  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:19:17.665991  164558 cri.go:87] found id: ""
	I0801 23:19:17.666016  164558 logs.go:274] 0 containers: []
	W0801 23:19:17.666024  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:19:17.666032  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:19:17.666084  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:19:17.688139  164558 cri.go:87] found id: ""
	I0801 23:19:17.688160  164558 logs.go:274] 0 containers: []
	W0801 23:19:17.688166  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:19:17.688172  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:19:17.688211  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:19:17.710158  164558 cri.go:87] found id: ""
	I0801 23:19:17.710178  164558 logs.go:274] 0 containers: []
	W0801 23:19:17.710184  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:19:17.710190  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:19:17.710228  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:19:17.733751  164558 cri.go:87] found id: ""
	I0801 23:19:17.733780  164558 logs.go:274] 0 containers: []
	W0801 23:19:17.733787  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:19:17.733793  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:19:17.733843  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:19:17.761225  164558 cri.go:87] found id: ""
	I0801 23:19:17.761248  164558 logs.go:274] 0 containers: []
	W0801 23:19:17.761253  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:19:17.761259  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:19:17.761307  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:19:17.785044  164558 cri.go:87] found id: ""
	I0801 23:19:17.785066  164558 logs.go:274] 0 containers: []
	W0801 23:19:17.785072  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:19:17.785078  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:19:17.785128  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:19:17.807200  164558 cri.go:87] found id: ""
	I0801 23:19:17.807221  164558 logs.go:274] 0 containers: []
	W0801 23:19:17.807227  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:19:17.807235  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:19:17.807245  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:19:17.850766  164558 logs.go:138] Found kubelet problem: Aug 01 23:19:17 kubernetes-upgrade-20220801231451-9849 kubelet[5366]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:19:17.895520  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:19:17.895552  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:19:17.909761  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:19:17.909783  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:19:17.958071  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:19:17.958094  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:19:17.958105  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:19:17.993347  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:19:17.993382  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:19:18.019410  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:19:18.019436  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:19:18.019542  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:19:18.019555  164558 out.go:239]   Aug 01 23:19:17 kubernetes-upgrade-20220801231451-9849 kubelet[5366]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:19:17 kubernetes-upgrade-20220801231451-9849 kubelet[5366]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:19:18.019571  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:19:18.019576  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:19:28.021515  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:19:28.116757  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:19:28.116827  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:19:28.142423  164558 cri.go:87] found id: ""
	I0801 23:19:28.142458  164558 logs.go:274] 0 containers: []
	W0801 23:19:28.142467  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:19:28.142476  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:19:28.142528  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:19:28.164944  164558 cri.go:87] found id: ""
	I0801 23:19:28.164972  164558 logs.go:274] 0 containers: []
	W0801 23:19:28.164981  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:19:28.164990  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:19:28.165044  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:19:28.187750  164558 cri.go:87] found id: ""
	I0801 23:19:28.187772  164558 logs.go:274] 0 containers: []
	W0801 23:19:28.187778  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:19:28.187784  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:19:28.187823  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:19:28.210386  164558 cri.go:87] found id: ""
	I0801 23:19:28.210412  164558 logs.go:274] 0 containers: []
	W0801 23:19:28.210421  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:19:28.210429  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:19:28.210481  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:19:28.238533  164558 cri.go:87] found id: ""
	I0801 23:19:28.238562  164558 logs.go:274] 0 containers: []
	W0801 23:19:28.238570  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:19:28.238578  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:19:28.238634  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:19:28.265466  164558 cri.go:87] found id: ""
	I0801 23:19:28.265498  164558 logs.go:274] 0 containers: []
	W0801 23:19:28.265508  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:19:28.265516  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:19:28.265572  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:19:28.297280  164558 cri.go:87] found id: ""
	I0801 23:19:28.297307  164558 logs.go:274] 0 containers: []
	W0801 23:19:28.297315  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:19:28.297324  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:19:28.297375  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:19:28.334041  164558 cri.go:87] found id: ""
	I0801 23:19:28.334074  164558 logs.go:274] 0 containers: []
	W0801 23:19:28.334082  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:19:28.334093  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:19:28.334107  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:19:28.390054  164558 logs.go:138] Found kubelet problem: Aug 01 23:19:28 kubernetes-upgrade-20220801231451-9849 kubelet[5666]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:19:28.444683  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:19:28.444726  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:19:28.461033  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:19:28.461065  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:19:28.515403  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:19:28.515431  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:19:28.515445  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:19:28.556533  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:19:28.556566  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:19:28.586943  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:19:28.586970  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:19:28.587103  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:19:28.587119  164558 out.go:239]   Aug 01 23:19:28 kubernetes-upgrade-20220801231451-9849 kubelet[5666]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:19:28 kubernetes-upgrade-20220801231451-9849 kubelet[5666]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:19:28.587125  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:19:28.587134  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:19:38.587464  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:19:38.617427  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:19:38.617507  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:19:38.640628  164558 cri.go:87] found id: ""
	I0801 23:19:38.640657  164558 logs.go:274] 0 containers: []
	W0801 23:19:38.640667  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:19:38.640675  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:19:38.640730  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:19:38.663825  164558 cri.go:87] found id: ""
	I0801 23:19:38.663850  164558 logs.go:274] 0 containers: []
	W0801 23:19:38.663858  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:19:38.663866  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:19:38.663920  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:19:38.686893  164558 cri.go:87] found id: ""
	I0801 23:19:38.686915  164558 logs.go:274] 0 containers: []
	W0801 23:19:38.686922  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:19:38.686929  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:19:38.686977  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:19:38.710997  164558 cri.go:87] found id: ""
	I0801 23:19:38.711026  164558 logs.go:274] 0 containers: []
	W0801 23:19:38.711034  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:19:38.711042  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:19:38.711097  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:19:38.738277  164558 cri.go:87] found id: ""
	I0801 23:19:38.738302  164558 logs.go:274] 0 containers: []
	W0801 23:19:38.738311  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:19:38.738319  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:19:38.738412  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:19:38.764114  164558 cri.go:87] found id: ""
	I0801 23:19:38.764137  164558 logs.go:274] 0 containers: []
	W0801 23:19:38.764145  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:19:38.764155  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:19:38.764210  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:19:38.788908  164558 cri.go:87] found id: ""
	I0801 23:19:38.788935  164558 logs.go:274] 0 containers: []
	W0801 23:19:38.788941  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:19:38.788948  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:19:38.788989  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:19:38.831895  164558 cri.go:87] found id: ""
	I0801 23:19:38.831918  164558 logs.go:274] 0 containers: []
	W0801 23:19:38.831927  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:19:38.831939  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:19:38.831954  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:19:38.846315  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:19:38.846379  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:19:38.900295  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:19:38.900347  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:19:38.900363  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:19:38.939323  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:19:38.939356  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:19:38.965282  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:19:38.965313  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:19:39.009804  164558 logs.go:138] Found kubelet problem: Aug 01 23:19:38 kubernetes-upgrade-20220801231451-9849 kubelet[5957]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:19:39.055293  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:19:39.055319  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:19:39.055418  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:19:39.055430  164558 out.go:239]   Aug 01 23:19:38 kubernetes-upgrade-20220801231451-9849 kubelet[5957]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:19:38 kubernetes-upgrade-20220801231451-9849 kubelet[5957]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:19:39.055434  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:19:39.055438  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:19:49.056232  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:19:49.117263  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:19:49.117320  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:19:49.139919  164558 cri.go:87] found id: ""
	I0801 23:19:49.139942  164558 logs.go:274] 0 containers: []
	W0801 23:19:49.139949  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:19:49.139955  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:19:49.140000  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:19:49.162894  164558 cri.go:87] found id: ""
	I0801 23:19:49.162922  164558 logs.go:274] 0 containers: []
	W0801 23:19:49.162929  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:19:49.162937  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:19:49.162997  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:19:49.185007  164558 cri.go:87] found id: ""
	I0801 23:19:49.185034  164558 logs.go:274] 0 containers: []
	W0801 23:19:49.185042  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:19:49.185049  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:19:49.185099  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:19:49.208414  164558 cri.go:87] found id: ""
	I0801 23:19:49.208443  164558 logs.go:274] 0 containers: []
	W0801 23:19:49.208452  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:19:49.208460  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:19:49.208506  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:19:49.232473  164558 cri.go:87] found id: ""
	I0801 23:19:49.232495  164558 logs.go:274] 0 containers: []
	W0801 23:19:49.232502  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:19:49.232507  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:19:49.232559  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:19:49.257699  164558 cri.go:87] found id: ""
	I0801 23:19:49.257724  164558 logs.go:274] 0 containers: []
	W0801 23:19:49.257732  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:19:49.257743  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:19:49.257794  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:19:49.282474  164558 cri.go:87] found id: ""
	I0801 23:19:49.282493  164558 logs.go:274] 0 containers: []
	W0801 23:19:49.282499  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:19:49.282504  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:19:49.282541  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:19:49.305481  164558 cri.go:87] found id: ""
	I0801 23:19:49.305508  164558 logs.go:274] 0 containers: []
	W0801 23:19:49.305517  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:19:49.305526  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:19:49.305537  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:19:49.367094  164558 logs.go:138] Found kubelet problem: Aug 01 23:19:49 kubernetes-upgrade-20220801231451-9849 kubelet[6254]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:19:49.412531  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:19:49.412562  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:19:49.428021  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:19:49.428048  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:19:49.478516  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:19:49.478544  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:19:49.478556  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:19:49.514870  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:19:49.514899  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:19:49.539569  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:19:49.539591  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:19:49.539709  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:19:49.539725  164558 out.go:239]   Aug 01 23:19:49 kubernetes-upgrade-20220801231451-9849 kubelet[6254]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:19:49 kubernetes-upgrade-20220801231451-9849 kubelet[6254]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:19:49.539733  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:19:49.539740  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:19:59.540436  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:19:59.617567  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:19:59.617638  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:19:59.642299  164558 cri.go:87] found id: ""
	I0801 23:19:59.642322  164558 logs.go:274] 0 containers: []
	W0801 23:19:59.642327  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:19:59.642333  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:19:59.642406  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:19:59.664851  164558 cri.go:87] found id: ""
	I0801 23:19:59.664873  164558 logs.go:274] 0 containers: []
	W0801 23:19:59.664890  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:19:59.664897  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:19:59.664939  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:19:59.687273  164558 cri.go:87] found id: ""
	I0801 23:19:59.687295  164558 logs.go:274] 0 containers: []
	W0801 23:19:59.687303  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:19:59.687310  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:19:59.687368  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:19:59.709274  164558 cri.go:87] found id: ""
	I0801 23:19:59.709294  164558 logs.go:274] 0 containers: []
	W0801 23:19:59.709302  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:19:59.709310  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:19:59.709360  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:19:59.736817  164558 cri.go:87] found id: ""
	I0801 23:19:59.736844  164558 logs.go:274] 0 containers: []
	W0801 23:19:59.736851  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:19:59.736860  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:19:59.736911  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:19:59.761722  164558 cri.go:87] found id: ""
	I0801 23:19:59.761748  164558 logs.go:274] 0 containers: []
	W0801 23:19:59.761757  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:19:59.761765  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:19:59.761824  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:19:59.785701  164558 cri.go:87] found id: ""
	I0801 23:19:59.785730  164558 logs.go:274] 0 containers: []
	W0801 23:19:59.785740  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:19:59.785745  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:19:59.785797  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:19:59.808152  164558 cri.go:87] found id: ""
	I0801 23:19:59.808173  164558 logs.go:274] 0 containers: []
	W0801 23:19:59.808182  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:19:59.808193  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:19:59.808207  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:19:59.843960  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:19:59.843989  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:19:59.868998  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:19:59.869025  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:19:59.917749  164558 logs.go:138] Found kubelet problem: Aug 01 23:19:59 kubernetes-upgrade-20220801231451-9849 kubelet[6558]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:19:59.962931  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:19:59.962960  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:19:59.977308  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:19:59.977331  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:20:00.027187  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:20:00.027211  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:20:00.027227  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:20:00.027335  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:20:00.027348  164558 out.go:239]   Aug 01 23:19:59 kubernetes-upgrade-20220801231451-9849 kubelet[6558]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:19:59 kubernetes-upgrade-20220801231451-9849 kubelet[6558]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:20:00.027354  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:20:00.027362  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:20:10.028391  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:20:10.117107  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:20:10.117205  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:20:10.141130  164558 cri.go:87] found id: ""
	I0801 23:20:10.141156  164558 logs.go:274] 0 containers: []
	W0801 23:20:10.141165  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:20:10.141174  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:20:10.141223  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:20:10.166996  164558 cri.go:87] found id: ""
	I0801 23:20:10.167024  164558 logs.go:274] 0 containers: []
	W0801 23:20:10.167032  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:20:10.167039  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:20:10.167101  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:20:10.189749  164558 cri.go:87] found id: ""
	I0801 23:20:10.189774  164558 logs.go:274] 0 containers: []
	W0801 23:20:10.189780  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:20:10.189786  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:20:10.189841  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:20:10.214438  164558 cri.go:87] found id: ""
	I0801 23:20:10.214479  164558 logs.go:274] 0 containers: []
	W0801 23:20:10.214488  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:20:10.214496  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:20:10.214551  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:20:10.246777  164558 cri.go:87] found id: ""
	I0801 23:20:10.246805  164558 logs.go:274] 0 containers: []
	W0801 23:20:10.246814  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:20:10.246822  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:20:10.246891  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:20:10.276063  164558 cri.go:87] found id: ""
	I0801 23:20:10.276090  164558 logs.go:274] 0 containers: []
	W0801 23:20:10.276098  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:20:10.276106  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:20:10.276157  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:20:10.300991  164558 cri.go:87] found id: ""
	I0801 23:20:10.301015  164558 logs.go:274] 0 containers: []
	W0801 23:20:10.301022  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:20:10.301028  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:20:10.301072  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:20:10.331805  164558 cri.go:87] found id: ""
	I0801 23:20:10.331825  164558 logs.go:274] 0 containers: []
	W0801 23:20:10.331831  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:20:10.331840  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:20:10.331849  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:20:10.357998  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:20:10.358020  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:20:10.406679  164558 logs.go:138] Found kubelet problem: Aug 01 23:20:10 kubernetes-upgrade-20220801231451-9849 kubelet[6854]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:20:10.453356  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:20:10.453385  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:20:10.469322  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:20:10.469350  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:20:10.524942  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:20:10.524965  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:20:10.524978  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:20:10.570463  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:20:10.570493  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:20:10.570605  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:20:10.570620  164558 out.go:239]   Aug 01 23:20:10 kubernetes-upgrade-20220801231451-9849 kubelet[6854]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:20:10 kubernetes-upgrade-20220801231451-9849 kubelet[6854]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:20:10.570627  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:20:10.570636  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:20:20.571000  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:20:20.616963  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:20:20.617040  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:20:20.640770  164558 cri.go:87] found id: ""
	I0801 23:20:20.640797  164558 logs.go:274] 0 containers: []
	W0801 23:20:20.640805  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:20:20.640814  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:20:20.640868  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:20:20.664848  164558 cri.go:87] found id: ""
	I0801 23:20:20.664871  164558 logs.go:274] 0 containers: []
	W0801 23:20:20.664891  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:20:20.664899  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:20:20.664948  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:20:20.686454  164558 cri.go:87] found id: ""
	I0801 23:20:20.686474  164558 logs.go:274] 0 containers: []
	W0801 23:20:20.686482  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:20:20.686489  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:20:20.686535  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:20:20.708371  164558 cri.go:87] found id: ""
	I0801 23:20:20.708392  164558 logs.go:274] 0 containers: []
	W0801 23:20:20.708400  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:20:20.708408  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:20:20.708451  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:20:20.732928  164558 cri.go:87] found id: ""
	I0801 23:20:20.732953  164558 logs.go:274] 0 containers: []
	W0801 23:20:20.732965  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:20:20.732973  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:20:20.733024  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:20:20.761197  164558 cri.go:87] found id: ""
	I0801 23:20:20.761222  164558 logs.go:274] 0 containers: []
	W0801 23:20:20.761250  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:20:20.761264  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:20:20.761325  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:20:20.787756  164558 cri.go:87] found id: ""
	I0801 23:20:20.787779  164558 logs.go:274] 0 containers: []
	W0801 23:20:20.787787  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:20:20.787795  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:20:20.787848  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:20:20.810435  164558 cri.go:87] found id: ""
	I0801 23:20:20.810455  164558 logs.go:274] 0 containers: []
	W0801 23:20:20.810460  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:20:20.810469  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:20:20.810480  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:20:20.878355  164558 logs.go:138] Found kubelet problem: Aug 01 23:20:20 kubernetes-upgrade-20220801231451-9849 kubelet[7151]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:20:20.924070  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:20:20.924098  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:20:20.938134  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:20:20.938156  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:20:20.987643  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0801 23:20:20.987677  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:20:20.987693  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:20:21.023918  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:20:21.023948  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:20:21.053518  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:20:21.053552  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0801 23:20:21.053678  164558 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0801 23:20:21.053699  164558 out.go:239]   Aug 01 23:20:20 kubernetes-upgrade-20220801231451-9849 kubelet[7151]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	  Aug 01 23:20:20 kubernetes-upgrade-20220801231451-9849 kubelet[7151]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:20:21.053705  164558 out.go:309] Setting ErrFile to fd 2...
	I0801 23:20:21.053712  164558 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:20:31.055095  164558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:20:31.064367  164558 kubeadm.go:630] restartCluster took 4m1.517408415s
	W0801 23:20:31.064527  164558 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0801 23:20:31.064563  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0801 23:20:31.849725  164558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 23:20:31.863693  164558 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 23:20:31.873262  164558 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 23:20:31.873320  164558 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 23:20:31.880540  164558 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 23:20:31.880578  164558 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 23:20:32.182658  164558 out.go:204]   - Generating certificates and keys ...
	I0801 23:20:33.382383  164558 out.go:204]   - Booting up control plane ...
	W0801 23:22:28.397703  164558 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0801 23:20:31.912492    7631 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0801 23:20:31.912492    7631 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0801 23:22:28.397765  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0801 23:22:29.096422  164558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 23:22:29.105994  164558 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 23:22:29.106042  164558 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 23:22:29.112733  164558 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 23:22:29.112770  164558 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 23:24:25.028729  164558 out.go:204]   - Generating certificates and keys ...
	I0801 23:24:25.031634  164558 out.go:204]   - Booting up control plane ...
	I0801 23:24:25.033986  164558 kubeadm.go:397] StartCluster complete in 7m55.520543761s
	I0801 23:24:25.034044  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:24:25.034102  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:24:25.056715  164558 cri.go:87] found id: ""
	I0801 23:24:25.056736  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.056744  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:24:25.056751  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:24:25.056807  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:24:25.078680  164558 cri.go:87] found id: ""
	I0801 23:24:25.078704  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.078709  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:24:25.078715  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:24:25.078771  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:24:25.101528  164558 cri.go:87] found id: ""
	I0801 23:24:25.101553  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.101561  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:24:25.101569  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:24:25.101619  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:24:25.126099  164558 cri.go:87] found id: ""
	I0801 23:24:25.126126  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.126133  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:24:25.126142  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:24:25.126200  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:24:25.151037  164558 cri.go:87] found id: ""
	I0801 23:24:25.151067  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.151076  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:24:25.151084  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:24:25.151140  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:24:25.173426  164558 cri.go:87] found id: ""
	I0801 23:24:25.173452  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.173461  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:24:25.173469  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:24:25.173518  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:24:25.195606  164558 cri.go:87] found id: ""
	I0801 23:24:25.195633  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.195640  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:24:25.195648  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:24:25.195704  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:24:25.219819  164558 cri.go:87] found id: ""
	I0801 23:24:25.219840  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.219846  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:24:25.219856  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:24:25.219865  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:24:25.265000  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:24:25.265031  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:24:25.289751  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:24:25.289777  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:24:25.340398  164558 logs.go:138] Found kubelet problem: Aug 01 23:24:24 kubernetes-upgrade-20220801231451-9849 kubelet[11658]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:24:25.385968  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:24:25.386006  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:24:25.402731  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:24:25.402760  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:24:25.450999  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0801 23:24:25.451038  164558 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0801 23:22:29.159019    9767 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0801 23:24:25.451072  164558 out.go:239] * 
	W0801 23:24:25.451303  164558 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0801 23:22:29.159019    9767 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0801 23:22:29.159019    9767 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 23:24:25.451337  164558 out.go:239] * 
	W0801 23:24:25.452624  164558 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0801 23:24:25.455427  164558 out.go:177] X Problems detected in kubelet:
	I0801 23:24:25.456749  164558 out.go:177]   Aug 01 23:24:24 kubernetes-upgrade-20220801231451-9849 kubelet[11658]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:24:25.460294  164558 out.go:177] 
	W0801 23:24:25.461628  164558 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0801 23:22:29.159019    9767 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 23:24:25.461788  164558 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0801 23:24:25.461866  164558 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0801 23:24:25.464073  164558 out.go:177] 

** /stderr **
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220801231451-9849 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220801231451-9849 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220801231451-9849 version --output=json: exit status 1 (39.986037ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "24",
	    "gitVersion": "v1.24.3",
	    "gitCommit": "aef86a93758dc3cb2c658dd9657ab4ad4afc21cb",
	    "gitTreeState": "clean",
	    "buildDate": "2022-07-13T14:30:46Z",
	    "goVersion": "go1.18.3",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  },
	  "kustomizeVersion": "v4.5.4"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.67.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-08-01 23:24:25.623924699 +0000 UTC m=+2345.044501241
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220801231451-9849
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220801231451-9849:

-- stdout --
	[
	    {
	        "Id": "6626503d2177f2c9fcf45a8eeef96ae745200976867dd2bbc92d2aebda6cb787",
	        "Created": "2022-08-01T23:15:10.919686786Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 165488,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-08-01T23:15:53.113967796Z",
	            "FinishedAt": "2022-08-01T23:15:49.989498491Z"
	        },
	        "Image": "sha256:b7ab23e982777465b97377a568e561067cf1b32ea520e8cd32f5ec0b95d538ab",
	        "ResolvConfPath": "/var/lib/docker/containers/6626503d2177f2c9fcf45a8eeef96ae745200976867dd2bbc92d2aebda6cb787/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6626503d2177f2c9fcf45a8eeef96ae745200976867dd2bbc92d2aebda6cb787/hostname",
	        "HostsPath": "/var/lib/docker/containers/6626503d2177f2c9fcf45a8eeef96ae745200976867dd2bbc92d2aebda6cb787/hosts",
	        "LogPath": "/var/lib/docker/containers/6626503d2177f2c9fcf45a8eeef96ae745200976867dd2bbc92d2aebda6cb787/6626503d2177f2c9fcf45a8eeef96ae745200976867dd2bbc92d2aebda6cb787-json.log",
	        "Name": "/kubernetes-upgrade-20220801231451-9849",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220801231451-9849:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220801231451-9849",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2bd92f0720025720a48942fa371c05a9792dca32aa4affc6e3e903b9e64cf607-init/diff:/var/lib/docker/overlay2/2efd3eb39bc3c661508b254a9ed801afae08af6e315e86101e64d94d6f0e9287/diff:/var/lib/docker/overlay2/5886584ff4433f673eb390b4d40f56e8273d66755956bd663c1963fb884c4c91/diff:/var/lib/docker/overlay2/48e5e7a30bafd4291c7663f9c33b4d0c86596ed4fcd9cba185960b3053167567/diff:/var/lib/docker/overlay2/463dbc36419623d2d416491c661c4612c3d42d326cecafae0a95bf5cca9c5b47/diff:/var/lib/docker/overlay2/f62eb8f2cc18951ea906d6851007bb4d9160ba7ec86a63cb1f81a3acc01cca23/diff:/var/lib/docker/overlay2/22a20a99ab7e0eed0e313294bbac0e4f0753140b4ce9a6bc4206e2db98ad045a/diff:/var/lib/docker/overlay2/d31f6ae94e0460e8d8a98683e59dfd2d4db3eccda055748f45596b1cdbac84e5/diff:/var/lib/docker/overlay2/e7a5515982bd99762af5e1379d203014ecdf6fc1759103d8d0ebd63a8f292adb/diff:/var/lib/docker/overlay2/d64b3e8c0be1eef075f275243cb66eb5596a10b13afecb8c1af49ad11f7a5735/diff:/var/lib/docker/overlay2/3bfd98
bb50c34197652f18a75996afb7a98cb5a627ff52d93d4a4c3d1fcff3b9/diff:/var/lib/docker/overlay2/c7645f88036aeb4a3c30796c34e513ee8460ddc7026acfc86913b2e099e3afc5/diff:/var/lib/docker/overlay2/0a1ef3b188652730ca6307ea3060e889fba5c80d74ffcfedc0ff9c463ba2ec33/diff:/var/lib/docker/overlay2/5f3a9f838c5989e11e473f6ec99069ea407122896961dcc848dbf4beef4d00ab/diff:/var/lib/docker/overlay2/f3f734e83fa4bbabbd9132229b702bc79a2c3f28dd8ce91209252bf540a1d927/diff:/var/lib/docker/overlay2/ceeea3688b416e4828b3b6f4ed65c618d421f2e1e10743ecb0d708d508eea61b/diff:/var/lib/docker/overlay2/bf2092d31caf041524a1cd19e07f9b0c9f979994715628a054968189bd296847/diff:/var/lib/docker/overlay2/ce6c81a2e87b0d005b8e552e219c4fc589b2d8afb55c06b942e7748372966477/diff:/var/lib/docker/overlay2/6c7ab3ca7571ff6b15032776d591cde10230c2a596df507c181d608139779851/diff:/var/lib/docker/overlay2/a8ec35fe74f9a06f5af490ea693a28cbe9c631d15267a4a67694bd777dd9f1bb/diff:/var/lib/docker/overlay2/b1ba9aa745d01e1ebda017258b1349a4a7ab57d480985b935cd8c7acb1a8b80a/diff:/var/lib/d
ocker/overlay2/0e021e2e9c8535eebf5c756edf8a2ac1af077aa5c0eb4c77f815607fc190bd61/diff:/var/lib/docker/overlay2/39297a0fa0ef29cc3e1c226a3a3cadc55f8e4d01400047f56594938150975aff/diff:/var/lib/docker/overlay2/91b14da3e6d24b1e628dc66e9fa3569191483094ab8fbaf946404c5756f8277a/diff:/var/lib/docker/overlay2/3e58d5618279084b1982e349fc8952825d293ea809bbc0fecdb88a34abf40654/diff:/var/lib/docker/overlay2/9d6208d08a4966ac8bd9f98c2fb8a8e65b59ce2d1c18b4d80ffb7c9a7773ff3d/diff:/var/lib/docker/overlay2/d0f5f4e2425828429724811556fba86308e7c0cfe56b28c42fe573e46a57fda9/diff:/var/lib/docker/overlay2/590ff6ad50552be8795b03999f427e637cc0941155eab6acf0b5fd7e5a3d3218/diff:/var/lib/docker/overlay2/b1900570c632a8eaeccd9f868593d433bab8bcb5b1aba18439d345db1d6fccce/diff:/var/lib/docker/overlay2/ea7212c7a8a24295bae44b02f809930885088e84022d60f1635ed69bbd496164/diff:/var/lib/docker/overlay2/af9669ff7dc83d3a0eb4f574d343c6c64179febd45485face0e17339164aed3c/diff:/var/lib/docker/overlay2/43082c8d6d169ca0a3d4d8d6c8ec8445409af71b67ef482eda46000cf10
62695/diff:/var/lib/docker/overlay2/1d507bc71d90e2f44a54bb93c31d17871ea9aa41b79de3747ad245173881fa49/diff:/var/lib/docker/overlay2/200fe7328edb48709de34617cd778fbe2ce86fd9fd3e9d9973548483c4f93f4d/diff:/var/lib/docker/overlay2/f2a29bb66961809e3564557a9586825ffe2a78983eb9f2095914a82f5e52cb67/diff:/var/lib/docker/overlay2/744eab14b2c2392745014b78efd37807f98946dfdd0a73f9e9ae1053523e36e1/diff:/var/lib/docker/overlay2/e509f3a8c16eba39f983c8f2f4706c5a840b572608304b18e022b7b5ec02ac41/diff:/var/lib/docker/overlay2/03d6276c178230b277f2bc863bd5431de3fa5ada6de2918a3bd38c33a0428c82/diff:/var/lib/docker/overlay2/bafa54ff0d513de6a30e58794d786f5ed96ef90796b1bd9b29a52ea337eb22a7/diff:/var/lib/docker/overlay2/da2f3bd29ba7afc6877a4635a1155f631c31862b11cfaa78935c03ea5ab3bda7/diff:/var/lib/docker/overlay2/b650b71c8a6e9d2e455484880e11e5c4b055c3e374e52bc7770b14bedce0cc38/diff:/var/lib/docker/overlay2/1b15603798e707f654cde58910aa96c4f0196f8511db6bd90c8380851c931fd7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2bd92f0720025720a48942fa371c05a9792dca32aa4affc6e3e903b9e64cf607/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2bd92f0720025720a48942fa371c05a9792dca32aa4affc6e3e903b9e64cf607/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2bd92f0720025720a48942fa371c05a9792dca32aa4affc6e3e903b9e64cf607/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220801231451-9849",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220801231451-9849/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220801231451-9849",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220801231451-9849",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220801231451-9849",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aaa61130e15bdb33f5d8b271969668f6f192847d7dcc3fb24d0b693a762c5309",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49337"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49336"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49333"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49335"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49334"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/aaa61130e15b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220801231451-9849": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6626503d2177",
	                        "kubernetes-upgrade-20220801231451-9849"
	                    ],
	                    "NetworkID": "a40a3e8982a8d793445f61ae8430ec4eafc9ba408689bea9f70432f5108b7c44",
	                    "EndpointID": "ef3206f0787571e74445acf1569dba13f7614e3a7820d207a1ac37503a2e6b8e",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220801231451-9849 -n kubernetes-upgrade-20220801231451-9849
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220801231451-9849 -n kubernetes-upgrade-20220801231451-9849: exit status 2 (375.642455ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220801231451-9849 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | cert-options-20220801231704-9849       | jenkins | v1.26.0 | 01 Aug 22 23:17 UTC | 01 Aug 22 23:17 UTC |
	|         | cert-options-20220801231704-9849                  |                                        |         |         |                     |                     |
	|         | --memory=2048                                     |                                        |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                         |                                        |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                     |                                        |         |         |                     |                     |
	|         | --apiserver-names=localhost                       |                                        |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                  |                                        |         |         |                     |                     |
	|         | --apiserver-port=8555                             |                                        |         |         |                     |                     |
	|         | --driver=docker                                   |                                        |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |         |                     |                     |
	| delete  | -p                                                | missing-upgrade-20220801231444-9849    | jenkins | v1.26.0 | 01 Aug 22 23:17 UTC | 01 Aug 22 23:17 UTC |
	|         | missing-upgrade-20220801231444-9849               |                                        |         |         |                     |                     |
	| start   | -p                                                | force-systemd-flag-20220801231709-9849 | jenkins | v1.26.0 | 01 Aug 22 23:17 UTC | 01 Aug 22 23:17 UTC |
	|         | force-systemd-flag-20220801231709-9849            |                                        |         |         |                     |                     |
	|         | --memory=2048 --force-systemd                     |                                        |         |         |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker            |                                        |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                        |         |         |                     |                     |
	| ssh     | cert-options-20220801231704-9849                  | cert-options-20220801231704-9849       | jenkins | v1.26.0 | 01 Aug 22 23:17 UTC | 01 Aug 22 23:17 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                        |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                        |         |         |                     |                     |
	| ssh     | -p                                                | cert-options-20220801231704-9849       | jenkins | v1.26.0 | 01 Aug 22 23:17 UTC | 01 Aug 22 23:17 UTC |
	|         | cert-options-20220801231704-9849                  |                                        |         |         |                     |                     |
	|         | -- sudo cat                                       |                                        |         |         |                     |                     |
	|         | /etc/kubernetes/admin.conf                        |                                        |         |         |                     |                     |
	| delete  | -p                                                | cert-options-20220801231704-9849       | jenkins | v1.26.0 | 01 Aug 22 23:17 UTC | 01 Aug 22 23:17 UTC |
	|         | cert-options-20220801231704-9849                  |                                        |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220801231735-9849    | jenkins | v1.26.0 | 01 Aug 22 23:17 UTC | 01 Aug 22 23:19 UTC |
	|         | old-k8s-version-20220801231735-9849               |                                        |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                        |         |         |                     |                     |
	|         | --keep-context=false                              |                                        |         |         |                     |                     |
	|         | --driver=docker                                   |                                        |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                        |         |         |                     |                     |
	| ssh     | force-systemd-flag-20220801231709-9849            | force-systemd-flag-20220801231709-9849 | jenkins | v1.26.0 | 01 Aug 22 23:17 UTC | 01 Aug 22 23:17 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                        |         |         |                     |                     |
	| delete  | -p                                                | force-systemd-flag-20220801231709-9849 | jenkins | v1.26.0 | 01 Aug 22 23:17 UTC | 01 Aug 22 23:17 UTC |
	|         | force-systemd-flag-20220801231709-9849            |                                        |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801231743-9849         | jenkins | v1.26.0 | 01 Aug 22 23:17 UTC | 01 Aug 22 23:18 UTC |
	|         | no-preload-20220801231743-9849                    |                                        |         |         |                     |                     |
	|         | --memory=2200                                     |                                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                        |         |         |                     |                     |
	|         | --driver=docker                                   |                                        |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                        |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220801231743-9849         | jenkins | v1.26.0 | 01 Aug 22 23:18 UTC | 01 Aug 22 23:18 UTC |
	|         | no-preload-20220801231743-9849                    |                                        |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220801231743-9849         | jenkins | v1.26.0 | 01 Aug 22 23:18 UTC | 01 Aug 22 23:19 UTC |
	|         | no-preload-20220801231743-9849                    |                                        |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                        |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220801231743-9849         | jenkins | v1.26.0 | 01 Aug 22 23:19 UTC | 01 Aug 22 23:19 UTC |
	|         | no-preload-20220801231743-9849                    |                                        |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220801231743-9849         | jenkins | v1.26.0 | 01 Aug 22 23:19 UTC | 01 Aug 22 23:24 UTC |
	|         | no-preload-20220801231743-9849                    |                                        |         |         |                     |                     |
	|         | --memory=2200                                     |                                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                        |         |         |                     |                     |
	|         | --driver=docker                                   |                                        |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                        |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220801231735-9849    | jenkins | v1.26.0 | 01 Aug 22 23:19 UTC | 01 Aug 22 23:19 UTC |
	|         | old-k8s-version-20220801231735-9849               |                                        |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220801231735-9849    | jenkins | v1.26.0 | 01 Aug 22 23:19 UTC | 01 Aug 22 23:20 UTC |
	|         | old-k8s-version-20220801231735-9849               |                                        |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                        |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220801231735-9849    | jenkins | v1.26.0 | 01 Aug 22 23:20 UTC | 01 Aug 22 23:20 UTC |
	|         | old-k8s-version-20220801231735-9849               |                                        |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220801231735-9849    | jenkins | v1.26.0 | 01 Aug 22 23:20 UTC |                     |
	|         | old-k8s-version-20220801231735-9849               |                                        |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                        |         |         |                     |                     |
	|         | --keep-context=false                              |                                        |         |         |                     |                     |
	|         | --driver=docker                                   |                                        |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                        |         |         |                     |                     |
	| start   | -p                                                | cert-expiration-20220801231640-9849    | jenkins | v1.26.0 | 01 Aug 22 23:20 UTC | 01 Aug 22 23:20 UTC |
	|         | cert-expiration-20220801231640-9849               |                                        |         |         |                     |                     |
	|         | --memory=2048                                     |                                        |         |         |                     |                     |
	|         | --cert-expiration=8760h                           |                                        |         |         |                     |                     |
	|         | --driver=docker                                   |                                        |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |         |                     |                     |
	| delete  | -p                                                | cert-expiration-20220801231640-9849    | jenkins | v1.26.0 | 01 Aug 22 23:20 UTC | 01 Aug 22 23:20 UTC |
	|         | cert-expiration-20220801231640-9849               |                                        |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220801232037-9849        | jenkins | v1.26.0 | 01 Aug 22 23:20 UTC | 01 Aug 22 23:21 UTC |
	|         | embed-certs-20220801232037-9849                   |                                        |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                        |         |         |                     |                     |
	|         | --driver=docker                                   |                                        |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                        |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220801232037-9849        | jenkins | v1.26.0 | 01 Aug 22 23:21 UTC | 01 Aug 22 23:21 UTC |
	|         | embed-certs-20220801232037-9849                   |                                        |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                        |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220801232037-9849        | jenkins | v1.26.0 | 01 Aug 22 23:21 UTC | 01 Aug 22 23:22 UTC |
	|         | embed-certs-20220801232037-9849                   |                                        |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                        |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220801232037-9849        | jenkins | v1.26.0 | 01 Aug 22 23:22 UTC | 01 Aug 22 23:22 UTC |
	|         | embed-certs-20220801232037-9849                   |                                        |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220801232037-9849        | jenkins | v1.26.0 | 01 Aug 22 23:22 UTC |                     |
	|         | embed-certs-20220801232037-9849                   |                                        |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                        |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                        |         |         |                     |                     |
	|         | --driver=docker                                   |                                        |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                        |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 23:22:03
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 23:22:03.112166  215236 out.go:296] Setting OutFile to fd 1 ...
	I0801 23:22:03.112279  215236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:22:03.112293  215236 out.go:309] Setting ErrFile to fd 2...
	I0801 23:22:03.112298  215236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:22:03.112407  215236 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 23:22:03.112983  215236 out.go:303] Setting JSON to false
	I0801 23:22:03.114834  215236 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3873,"bootTime":1659392250,"procs":849,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0801 23:22:03.114917  215236 start.go:125] virtualization: kvm guest
	I0801 23:22:03.117568  215236 out.go:177] * [embed-certs-20220801232037-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0801 23:22:03.119022  215236 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 23:22:03.119027  215236 notify.go:193] Checking for updates...
	I0801 23:22:03.121254  215236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 23:22:03.122793  215236 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 23:22:03.124086  215236 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 23:22:03.125663  215236 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0801 23:22:03.127551  215236 config.go:180] Loaded profile config "embed-certs-20220801232037-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:22:03.128108  215236 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 23:22:03.169258  215236 docker.go:137] docker version: linux-20.10.17
	I0801 23:22:03.169379  215236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 23:22:03.276468  215236 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-08-01 23:22:03.200829472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 23:22:03.276565  215236 docker.go:254] overlay module found
	I0801 23:22:03.278852  215236 out.go:177] * Using the docker driver based on existing profile
	I0801 23:22:03.280219  215236 start.go:284] selected driver: docker
	I0801 23:22:03.280234  215236 start.go:808] validating driver "docker" against &{Name:embed-certs-20220801232037-9849 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220801232037-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 23:22:03.280343  215236 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 23:22:03.281143  215236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 23:22:03.389562  215236 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:57 SystemTime:2022-08-01 23:22:03.311009551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 23:22:03.389848  215236 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 23:22:03.389885  215236 cni.go:95] Creating CNI manager for ""
	I0801 23:22:03.389893  215236 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0801 23:22:03.389920  215236 start_flags.go:310] config:
	{Name:embed-certs-20220801232037-9849 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220801232037-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 23:22:03.392266  215236 out.go:177] * Starting control plane node embed-certs-20220801232037-9849 in cluster embed-certs-20220801232037-9849
	I0801 23:22:03.393555  215236 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0801 23:22:03.394772  215236 out.go:177] * Pulling base image ...
	I0801 23:22:03.396008  215236 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0801 23:22:03.396047  215236 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4
	I0801 23:22:03.396058  215236 cache.go:57] Caching tarball of preloaded images
	I0801 23:22:03.396121  215236 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 23:22:03.396242  215236 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 23:22:03.396261  215236 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on containerd
	I0801 23:22:03.396378  215236 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/embed-certs-20220801232037-9849/config.json ...
	I0801 23:22:03.431272  215236 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 23:22:03.431301  215236 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 23:22:03.431336  215236 cache.go:208] Successfully downloaded all kic artifacts
	I0801 23:22:03.431366  215236 start.go:371] acquiring machines lock for embed-certs-20220801232037-9849: {Name:mk9b0ee60b878e723f9e0cfb876f3e9e1de844fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 23:22:03.431458  215236 start.go:375] acquired machines lock for "embed-certs-20220801232037-9849" in 69.729µs
	I0801 23:22:03.431478  215236 start.go:95] Skipping create...Using existing machine configuration
	I0801 23:22:03.431483  215236 fix.go:55] fixHost starting: 
	I0801 23:22:03.431701  215236 cli_runner.go:164] Run: docker container inspect embed-certs-20220801232037-9849 --format={{.State.Status}}
	I0801 23:22:03.464613  215236 fix.go:103] recreateIfNeeded on embed-certs-20220801232037-9849: state=Stopped err=<nil>
	W0801 23:22:03.464641  215236 fix.go:129] unexpected machine state, will restart: <nil>
	I0801 23:22:03.467068  215236 out.go:177] * Restarting existing docker container for "embed-certs-20220801232037-9849" ...
	I0801 23:22:00.770525  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:02.771997  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:01.186055  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:03.186519  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:03.468462  215236 cli_runner.go:164] Run: docker start embed-certs-20220801232037-9849
	I0801 23:22:03.874882  215236 cli_runner.go:164] Run: docker container inspect embed-certs-20220801232037-9849 --format={{.State.Status}}
	I0801 23:22:03.912257  215236 kic.go:415] container "embed-certs-20220801232037-9849" state is running.
	I0801 23:22:03.912668  215236 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220801232037-9849
	I0801 23:22:03.948089  215236 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/embed-certs-20220801232037-9849/config.json ...
	I0801 23:22:03.948306  215236 machine.go:88] provisioning docker machine ...
	I0801 23:22:03.948334  215236 ubuntu.go:169] provisioning hostname "embed-certs-20220801232037-9849"
	I0801 23:22:03.948380  215236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801232037-9849
	I0801 23:22:03.984462  215236 main.go:134] libmachine: Using SSH client type: native
	I0801 23:22:03.984693  215236 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil>  [] 0s} 127.0.0.1 49392 <nil> <nil>}
	I0801 23:22:03.984726  215236 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220801232037-9849 && echo "embed-certs-20220801232037-9849" | sudo tee /etc/hostname
	I0801 23:22:03.985441  215236 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59072->127.0.0.1:49392: read: connection reset by peer
	I0801 23:22:07.113972  215236 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220801232037-9849
	
	I0801 23:22:07.114057  215236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801232037-9849
	I0801 23:22:07.149266  215236 main.go:134] libmachine: Using SSH client type: native
	I0801 23:22:07.149412  215236 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil>  [] 0s} 127.0.0.1 49392 <nil> <nil>}
	I0801 23:22:07.149432  215236 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220801232037-9849' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220801232037-9849/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220801232037-9849' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 23:22:07.262224  215236 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 23:22:07.262257  215236 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 23:22:07.262302  215236 ubuntu.go:177] setting up certificates
	I0801 23:22:07.262314  215236 provision.go:83] configureAuth start
	I0801 23:22:07.262401  215236 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220801232037-9849
	I0801 23:22:07.296827  215236 provision.go:138] copyHostCerts
	I0801 23:22:07.296889  215236 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 23:22:07.296904  215236 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 23:22:07.296972  215236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 23:22:07.297064  215236 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 23:22:07.297078  215236 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 23:22:07.297117  215236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1675 bytes)
	I0801 23:22:07.297197  215236 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 23:22:07.297209  215236 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 23:22:07.297251  215236 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 23:22:07.297307  215236 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220801232037-9849 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220801232037-9849]
	I0801 23:22:07.860797  215236 provision.go:172] copyRemoteCerts
	I0801 23:22:07.860857  215236 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 23:22:07.860905  215236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801232037-9849
	I0801 23:22:07.896018  215236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801232037-9849/id_rsa Username:docker}
	I0801 23:22:07.982224  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 23:22:08.001395  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0801 23:22:08.019734  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0801 23:22:08.036347  215236 provision.go:86] duration metric: configureAuth took 774.017243ms
	I0801 23:22:08.036371  215236 ubuntu.go:193] setting minikube options for container-runtime
	I0801 23:22:08.036543  215236 config.go:180] Loaded profile config "embed-certs-20220801232037-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:22:08.036556  215236 machine.go:91] provisioned docker machine in 4.088235779s
	I0801 23:22:08.036563  215236 start.go:307] post-start starting for "embed-certs-20220801232037-9849" (driver="docker")
	I0801 23:22:08.036571  215236 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 23:22:08.036610  215236 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 23:22:08.036642  215236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801232037-9849
	I0801 23:22:08.072739  215236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801232037-9849/id_rsa Username:docker}
	I0801 23:22:05.271157  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:07.770044  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:05.686581  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:08.186225  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:10.186453  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:08.157554  215236 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 23:22:08.160362  215236 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 23:22:08.160399  215236 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 23:22:08.160413  215236 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 23:22:08.160423  215236 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 23:22:08.160438  215236 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 23:22:08.160772  215236 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 23:22:08.160877  215236 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/98492.pem -> 98492.pem in /etc/ssl/certs
	I0801 23:22:08.160988  215236 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 23:22:08.168263  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/98492.pem --> /etc/ssl/certs/98492.pem (1708 bytes)
	I0801 23:22:08.185815  215236 start.go:310] post-start completed in 149.239394ms
	I0801 23:22:08.185891  215236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 23:22:08.185941  215236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801232037-9849
	I0801 23:22:08.220179  215236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801232037-9849/id_rsa Username:docker}
	I0801 23:22:08.298524  215236 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 23:22:08.302138  215236 fix.go:57] fixHost completed within 4.870650003s
	I0801 23:22:08.302159  215236 start.go:82] releasing machines lock for "embed-certs-20220801232037-9849", held for 4.870689207s
	I0801 23:22:08.302238  215236 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220801232037-9849
	I0801 23:22:08.335695  215236 ssh_runner.go:195] Run: systemctl --version
	I0801 23:22:08.335750  215236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801232037-9849
	I0801 23:22:08.335766  215236 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 23:22:08.335839  215236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220801232037-9849
	I0801 23:22:08.373053  215236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801232037-9849/id_rsa Username:docker}
	I0801 23:22:08.373226  215236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/embed-certs-20220801232037-9849/id_rsa Username:docker}
	I0801 23:22:08.479709  215236 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0801 23:22:08.490747  215236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 23:22:08.499960  215236 docker.go:188] disabling docker service ...
	I0801 23:22:08.500004  215236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0801 23:22:08.509283  215236 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0801 23:22:08.517516  215236 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0801 23:22:08.595515  215236 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0801 23:22:08.674831  215236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0801 23:22:08.684096  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 23:22:08.696884  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0801 23:22:08.704315  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0801 23:22:08.711721  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0801 23:22:08.719165  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0801 23:22:08.726490  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0801 23:22:08.734377  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %!s(MISSING) "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
	I0801 23:22:08.747796  215236 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0801 23:22:08.754750  215236 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0801 23:22:08.761867  215236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 23:22:08.859815  215236 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0801 23:22:08.931153  215236 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0801 23:22:08.931233  215236 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0801 23:22:08.934528  215236 start.go:471] Will wait 60s for crictl version
	I0801 23:22:08.934574  215236 ssh_runner.go:195] Run: sudo crictl version
	I0801 23:22:08.960441  215236 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-08-01T23:22:08Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0801 23:22:10.270738  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:12.770873  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:12.687694  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:15.186300  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:15.270832  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:17.271343  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:19.769785  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:20.008413  215236 ssh_runner.go:195] Run: sudo crictl version
	I0801 23:22:20.031258  215236 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0801 23:22:20.031307  215236 ssh_runner.go:195] Run: containerd --version
	I0801 23:22:20.062069  215236 ssh_runner.go:195] Run: containerd --version
	I0801 23:22:20.093974  215236 out.go:177] * Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
	I0801 23:22:17.686084  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:19.686199  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:20.095492  215236 cli_runner.go:164] Run: docker network inspect embed-certs-20220801232037-9849 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0801 23:22:20.127563  215236 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0801 23:22:20.130777  215236 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 23:22:20.139951  215236 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0801 23:22:20.140006  215236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0801 23:22:20.163369  215236 containerd.go:547] all images are preloaded for containerd runtime.
	I0801 23:22:20.163388  215236 containerd.go:461] Images already preloaded, skipping extraction
	I0801 23:22:20.163434  215236 ssh_runner.go:195] Run: sudo crictl images --output json
	I0801 23:22:20.186642  215236 containerd.go:547] all images are preloaded for containerd runtime.
	I0801 23:22:20.186667  215236 cache_images.go:84] Images are preloaded, skipping loading
	I0801 23:22:20.186710  215236 ssh_runner.go:195] Run: sudo crictl info
	I0801 23:22:20.209905  215236 cni.go:95] Creating CNI manager for ""
	I0801 23:22:20.209927  215236 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0801 23:22:20.209939  215236 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 23:22:20.209951  215236 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220801232037-9849 NodeName:embed-certs-20220801232037-9849 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 23:22:20.210062  215236 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220801232037-9849"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0801 23:22:20.210140  215236 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220801232037-9849 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220801232037-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0801 23:22:20.210185  215236 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 23:22:20.217033  215236 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 23:22:20.217083  215236 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 23:22:20.223709  215236 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (524 bytes)
	I0801 23:22:20.236190  215236 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 23:22:20.248202  215236 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0801 23:22:20.260177  215236 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0801 23:22:20.262852  215236 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 23:22:20.272077  215236 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/embed-certs-20220801232037-9849 for IP: 192.168.94.2
	I0801 23:22:20.272169  215236 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 23:22:20.272222  215236 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 23:22:20.272302  215236 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/embed-certs-20220801232037-9849/client.key
	I0801 23:22:20.272377  215236 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/embed-certs-20220801232037-9849/apiserver.key.ad8e880a
	I0801 23:22:20.272436  215236 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/embed-certs-20220801232037-9849/proxy-client.key
	I0801 23:22:20.272542  215236 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/9849.pem (1338 bytes)
	W0801 23:22:20.272584  215236 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/9849_empty.pem, impossibly tiny 0 bytes
	I0801 23:22:20.272601  215236 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 23:22:20.272636  215236 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 23:22:20.272669  215236 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 23:22:20.272705  215236 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1675 bytes)
	I0801 23:22:20.272761  215236 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/98492.pem (1708 bytes)
	I0801 23:22:20.273280  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/embed-certs-20220801232037-9849/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 23:22:20.289996  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/embed-certs-20220801232037-9849/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0801 23:22:20.306440  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/embed-certs-20220801232037-9849/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 23:22:20.321960  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/embed-certs-20220801232037-9849/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0801 23:22:20.337802  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 23:22:20.353232  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0801 23:22:20.368998  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 23:22:20.384809  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0801 23:22:20.400608  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/98492.pem --> /usr/share/ca-certificates/98492.pem (1708 bytes)
	I0801 23:22:20.416499  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 23:22:20.432370  215236 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/9849.pem --> /usr/share/ca-certificates/9849.pem (1338 bytes)
	I0801 23:22:20.448357  215236 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 23:22:20.460616  215236 ssh_runner.go:195] Run: openssl version
	I0801 23:22:20.465093  215236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9849.pem && ln -fs /usr/share/ca-certificates/9849.pem /etc/ssl/certs/9849.pem"
	I0801 23:22:20.472452  215236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9849.pem
	I0801 23:22:20.475273  215236 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 22:50 /usr/share/ca-certificates/9849.pem
	I0801 23:22:20.475311  215236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9849.pem
	I0801 23:22:20.479700  215236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9849.pem /etc/ssl/certs/51391683.0"
	I0801 23:22:20.485876  215236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98492.pem && ln -fs /usr/share/ca-certificates/98492.pem /etc/ssl/certs/98492.pem"
	I0801 23:22:20.492688  215236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98492.pem
	I0801 23:22:20.495595  215236 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 22:50 /usr/share/ca-certificates/98492.pem
	I0801 23:22:20.495639  215236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98492.pem
	I0801 23:22:20.499993  215236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98492.pem /etc/ssl/certs/3ec20f2e.0"
	I0801 23:22:20.506050  215236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 23:22:20.512666  215236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 23:22:20.515418  215236 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0801 23:22:20.515454  215236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 23:22:20.519792  215236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 23:22:20.526157  215236 kubeadm.go:395] StartCluster: {Name:embed-certs-20220801232037-9849 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220801232037-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 23:22:20.526260  215236 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0801 23:22:20.526291  215236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0801 23:22:20.549182  215236 cri.go:87] found id: "94713f1791d832acf1c51396d9a8f9163f013511390c8cef36943008525ac293"
	I0801 23:22:20.549203  215236 cri.go:87] found id: "231cca998226aa1f02ba4a1ee285230c4e738ad37d614d5f2acd01d3e9b99589"
	I0801 23:22:20.549210  215236 cri.go:87] found id: "e1a3d20633a2fd862971253b1d46b80bfe33945caa2a47f731b396643661f31b"
	I0801 23:22:20.549216  215236 cri.go:87] found id: "75f19e2f62904ac8eda43316b17a77472055ab0001d8f09d8b27e5325b089d3d"
	I0801 23:22:20.549222  215236 cri.go:87] found id: "53dcfd989b18bc7f492c881d4bbe288ab2e9af52f7e7b50a4524d6c682364cee"
	I0801 23:22:20.549228  215236 cri.go:87] found id: "d1ca79121c5ba62bff354578943a8945e29314ec1e2613b874cccdd5bf78fee7"
	I0801 23:22:20.549233  215236 cri.go:87] found id: "050dc79811e13b31f20da6a64cf38f2ba16a1a5650f42904374ab914e85b1806"
	I0801 23:22:20.549242  215236 cri.go:87] found id: "a5338db9b2cb6025e3b902ae37c4456020e3dfb3bd6e56e85c02016fc8fff09c"
	I0801 23:22:20.549255  215236 cri.go:87] found id: ""
	I0801 23:22:20.549293  215236 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0801 23:22:20.560118  215236 cri.go:114] JSON = null
	W0801 23:22:20.560154  215236 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0801 23:22:20.560208  215236 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 23:22:20.566371  215236 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0801 23:22:20.566389  215236 kubeadm.go:626] restartCluster start
	I0801 23:22:20.566421  215236 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0801 23:22:20.572375  215236 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:20.573270  215236 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220801232037-9849" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 23:22:20.573716  215236 kubeconfig.go:127] "embed-certs-20220801232037-9849" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig - will repair!
	I0801 23:22:20.574303  215236 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mk908131de2da31ada6455cebc27e25fe21e4ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:22:20.575642  215236 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0801 23:22:20.581767  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:20.581809  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:20.589555  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:20.789913  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:20.790002  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:20.798866  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:20.990202  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:20.990268  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:20.998608  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:21.189811  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:21.189870  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:21.198263  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:21.390537  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:21.390625  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:21.398961  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:21.590265  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:21.590368  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:21.598606  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:21.789825  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:21.789914  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:21.798458  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:21.989688  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:21.989794  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:21.998280  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:22.190608  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:22.190669  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:22.198956  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:22.390272  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:22.390377  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:22.398769  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:22.590094  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:22.590154  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:22.598466  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:22.789650  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:22.789718  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:22.798193  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:22.990505  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:22.990569  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:22.999330  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:21.770246  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:23.770618  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:22.186918  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:24.686394  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:23.189932  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:23.190002  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:23.198523  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:23.389739  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:23.389814  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:23.398234  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:23.590540  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:23.590626  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:23.598842  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:23.598862  215236 api_server.go:165] Checking apiserver status ...
	I0801 23:22:23.598896  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0801 23:22:23.606311  215236 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:23.606348  215236 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0801 23:22:23.606357  215236 kubeadm.go:1092] stopping kube-system containers ...
	I0801 23:22:23.606371  215236 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0801 23:22:23.606425  215236 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0801 23:22:23.629577  215236 cri.go:87] found id: "94713f1791d832acf1c51396d9a8f9163f013511390c8cef36943008525ac293"
	I0801 23:22:23.629607  215236 cri.go:87] found id: "231cca998226aa1f02ba4a1ee285230c4e738ad37d614d5f2acd01d3e9b99589"
	I0801 23:22:23.629618  215236 cri.go:87] found id: "e1a3d20633a2fd862971253b1d46b80bfe33945caa2a47f731b396643661f31b"
	I0801 23:22:23.629629  215236 cri.go:87] found id: "75f19e2f62904ac8eda43316b17a77472055ab0001d8f09d8b27e5325b089d3d"
	I0801 23:22:23.629643  215236 cri.go:87] found id: "53dcfd989b18bc7f492c881d4bbe288ab2e9af52f7e7b50a4524d6c682364cee"
	I0801 23:22:23.629658  215236 cri.go:87] found id: "d1ca79121c5ba62bff354578943a8945e29314ec1e2613b874cccdd5bf78fee7"
	I0801 23:22:23.629671  215236 cri.go:87] found id: "050dc79811e13b31f20da6a64cf38f2ba16a1a5650f42904374ab914e85b1806"
	I0801 23:22:23.629680  215236 cri.go:87] found id: "a5338db9b2cb6025e3b902ae37c4456020e3dfb3bd6e56e85c02016fc8fff09c"
	I0801 23:22:23.629689  215236 cri.go:87] found id: ""
	I0801 23:22:23.629693  215236 cri.go:232] Stopping containers: [94713f1791d832acf1c51396d9a8f9163f013511390c8cef36943008525ac293 231cca998226aa1f02ba4a1ee285230c4e738ad37d614d5f2acd01d3e9b99589 e1a3d20633a2fd862971253b1d46b80bfe33945caa2a47f731b396643661f31b 75f19e2f62904ac8eda43316b17a77472055ab0001d8f09d8b27e5325b089d3d 53dcfd989b18bc7f492c881d4bbe288ab2e9af52f7e7b50a4524d6c682364cee d1ca79121c5ba62bff354578943a8945e29314ec1e2613b874cccdd5bf78fee7 050dc79811e13b31f20da6a64cf38f2ba16a1a5650f42904374ab914e85b1806 a5338db9b2cb6025e3b902ae37c4456020e3dfb3bd6e56e85c02016fc8fff09c]
	I0801 23:22:23.629732  215236 ssh_runner.go:195] Run: which crictl
	I0801 23:22:23.632353  215236 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 94713f1791d832acf1c51396d9a8f9163f013511390c8cef36943008525ac293 231cca998226aa1f02ba4a1ee285230c4e738ad37d614d5f2acd01d3e9b99589 e1a3d20633a2fd862971253b1d46b80bfe33945caa2a47f731b396643661f31b 75f19e2f62904ac8eda43316b17a77472055ab0001d8f09d8b27e5325b089d3d 53dcfd989b18bc7f492c881d4bbe288ab2e9af52f7e7b50a4524d6c682364cee d1ca79121c5ba62bff354578943a8945e29314ec1e2613b874cccdd5bf78fee7 050dc79811e13b31f20da6a64cf38f2ba16a1a5650f42904374ab914e85b1806 a5338db9b2cb6025e3b902ae37c4456020e3dfb3bd6e56e85c02016fc8fff09c
	I0801 23:22:23.657769  215236 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0801 23:22:23.667437  215236 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 23:22:23.673943  215236 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Aug  1 23:21 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug  1 23:21 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Aug  1 23:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug  1 23:21 /etc/kubernetes/scheduler.conf
	
	I0801 23:22:23.673986  215236 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0801 23:22:23.680514  215236 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0801 23:22:23.687406  215236 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0801 23:22:23.693718  215236 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:23.693756  215236 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0801 23:22:23.699739  215236 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0801 23:22:23.705806  215236 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0801 23:22:23.705849  215236 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0801 23:22:23.711981  215236 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 23:22:23.718714  215236 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0801 23:22:23.718739  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 23:22:23.767076  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 23:22:24.438300  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0801 23:22:24.626543  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 23:22:24.676912  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0801 23:22:24.762642  215236 api_server.go:51] waiting for apiserver process to appear ...
	I0801 23:22:24.762708  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:22:25.333167  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:22:25.832732  215236 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:22:25.847890  215236 api_server.go:71] duration metric: took 1.085248102s to wait for apiserver process to appear ...
	I0801 23:22:25.847919  215236 api_server.go:87] waiting for apiserver healthz status ...
	I0801 23:22:25.847933  215236 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0801 23:22:25.848331  215236 api_server.go:256] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0801 23:22:26.349055  215236 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0801 23:22:26.272587  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:28.771222  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:27.186997  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:29.685584  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:29.056161  215236 api_server.go:266] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0801 23:22:29.056192  215236 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0801 23:22:29.348513  215236 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0801 23:22:29.353372  215236 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 23:22:29.353398  215236 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 23:22:29.848926  215236 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0801 23:22:29.853102  215236 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0801 23:22:29.853130  215236 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0801 23:22:30.348629  215236 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0801 23:22:30.353291  215236 api_server.go:266] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0801 23:22:30.359788  215236 api_server.go:140] control plane version: v1.24.3
	I0801 23:22:30.359832  215236 api_server.go:130] duration metric: took 4.511906477s to wait for apiserver health ...
	I0801 23:22:30.359843  215236 cni.go:95] Creating CNI manager for ""
	I0801 23:22:30.359850  215236 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0801 23:22:30.361837  215236 out.go:177] * Configuring CNI (Container Networking Interface) ...
	W0801 23:22:28.397703  164558 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0801 23:20:31.912492    7631 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0801 23:22:28.397765  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0801 23:22:29.096422  164558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 23:22:29.105994  164558 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 23:22:29.106042  164558 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 23:22:29.112733  164558 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 23:22:29.112770  164558 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 23:22:30.363359  215236 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0801 23:22:30.367356  215236 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ...
	I0801 23:22:30.367378  215236 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0801 23:22:30.381611  215236 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0801 23:22:31.869390  215236 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.487706969s)
	I0801 23:22:31.869429  215236 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 23:22:31.876658  215236 system_pods.go:59] 9 kube-system pods found
	I0801 23:22:31.876692  215236 system_pods.go:61] "coredns-6d4b75cb6d-g57h2" [6b61b7ec-a16f-49c1-9eb5-0306ab547858] Running
	I0801 23:22:31.876699  215236 system_pods.go:61] "etcd-embed-certs-20220801232037-9849" [1f8de904-f457-4b3c-bc49-278c2d4c9896] Running
	I0801 23:22:31.876706  215236 system_pods.go:61] "kindnet-5c8nw" [8df61f1a-f22e-4dc4-a226-0a678642b9e5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0801 23:22:31.876711  215236 system_pods.go:61] "kube-apiserver-embed-certs-20220801232037-9849" [5479e618-c30d-4a70-bb2e-1b636e1ac7ae] Running
	I0801 23:22:31.876718  215236 system_pods.go:61] "kube-controller-manager-embed-certs-20220801232037-9849" [6cedc9e1-f16e-4187-a09e-42309c77b65f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0801 23:22:31.876727  215236 system_pods.go:61] "kube-proxy-llff8" [2d35a6cf-5862-4249-930a-08bcc09a9c7e] Running
	I0801 23:22:31.876736  215236 system_pods.go:61] "kube-scheduler-embed-certs-20220801232037-9849" [3fccc0bb-e77a-406e-aeb5-406fb1c9fb75] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0801 23:22:31.876746  215236 system_pods.go:61] "metrics-server-5c6f97fb75-6d5mq" [b4f7f93c-ab73-4250-93b2-57a7b1fc0e6e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 23:22:31.876754  215236 system_pods.go:61] "storage-provisioner" [a98dc448-7b12-4dcc-a8df-8c324f04b8d0] Running
	I0801 23:22:31.876759  215236 system_pods.go:74] duration metric: took 7.324023ms to wait for pod list to return data ...
	I0801 23:22:31.876769  215236 node_conditions.go:102] verifying NodePressure condition ...
	I0801 23:22:31.879252  215236 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0801 23:22:31.879276  215236 node_conditions.go:123] node cpu capacity is 8
	I0801 23:22:31.879287  215236 node_conditions.go:105] duration metric: took 2.510584ms to run NodePressure ...
	I0801 23:22:31.879301  215236 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0801 23:22:32.035197  215236 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0801 23:22:32.038882  215236 kubeadm.go:777] kubelet initialised
	I0801 23:22:32.038904  215236 kubeadm.go:778] duration metric: took 3.683172ms waiting for restarted kubelet to initialise ...
	I0801 23:22:32.038911  215236 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 23:22:32.044855  215236 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-g57h2" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:32.051904  215236 pod_ready.go:92] pod "coredns-6d4b75cb6d-g57h2" in "kube-system" namespace has status "Ready":"True"
	I0801 23:22:32.051927  215236 pod_ready.go:81] duration metric: took 7.042632ms waiting for pod "coredns-6d4b75cb6d-g57h2" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:32.051939  215236 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220801232037-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:32.055760  215236 pod_ready.go:92] pod "etcd-embed-certs-20220801232037-9849" in "kube-system" namespace has status "Ready":"True"
	I0801 23:22:32.055775  215236 pod_ready.go:81] duration metric: took 3.829702ms waiting for pod "etcd-embed-certs-20220801232037-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:32.055785  215236 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220801232037-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:32.059319  215236 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220801232037-9849" in "kube-system" namespace has status "Ready":"True"
	I0801 23:22:32.059337  215236 pod_ready.go:81] duration metric: took 3.545775ms waiting for pod "kube-apiserver-embed-certs-20220801232037-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:32.059347  215236 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220801232037-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:31.270955  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:33.770522  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:31.687413  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:34.186705  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:34.278196  215236 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220801232037-9849" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:36.278483  215236 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220801232037-9849" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:35.770870  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:37.770972  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:36.685917  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:39.186491  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:38.278652  215236 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220801232037-9849" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:38.777731  215236 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220801232037-9849" in "kube-system" namespace has status "Ready":"True"
	I0801 23:22:38.777766  215236 pod_ready.go:81] duration metric: took 6.718411528s waiting for pod "kube-controller-manager-embed-certs-20220801232037-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:38.777779  215236 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-llff8" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:38.782461  215236 pod_ready.go:92] pod "kube-proxy-llff8" in "kube-system" namespace has status "Ready":"True"
	I0801 23:22:38.782477  215236 pod_ready.go:81] duration metric: took 4.691178ms waiting for pod "kube-proxy-llff8" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:38.782485  215236 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220801232037-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:38.786215  215236 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220801232037-9849" in "kube-system" namespace has status "Ready":"True"
	I0801 23:22:38.786231  215236 pod_ready.go:81] duration metric: took 3.740108ms waiting for pod "kube-scheduler-embed-certs-20220801232037-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:38.786239  215236 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace to be "Ready" ...
	I0801 23:22:40.795913  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:40.269858  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:42.270742  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:44.270801  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:41.686195  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:44.185661  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:43.295054  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:45.295137  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:47.795360  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:46.770728  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:48.770873  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:46.186123  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:48.186296  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:50.295888  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:52.795230  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:51.270108  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:53.271204  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:50.686609  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:52.686729  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:55.186204  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:55.295354  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:57.295704  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:55.769671  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:57.770847  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:59.771161  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:57.186547  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:59.685700  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:22:59.795274  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:02.295562  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:02.271244  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:04.770200  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:01.685884  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:04.186221  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:04.796113  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:07.295091  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:07.270703  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:09.769735  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:06.186500  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:08.686309  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:09.295333  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:11.795652  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:11.770242  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:13.770672  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:10.686617  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:13.185617  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:15.186209  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:14.294903  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:16.295619  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:16.270823  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:18.770439  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:17.186312  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:19.685605  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:18.795320  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:21.295453  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:20.770511  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:22.770910  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:21.686621  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:24.187019  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:23.795625  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:25.795705  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:25.270690  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:27.770875  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:26.686226  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:29.186058  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:28.295155  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:30.795334  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:32.795554  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:30.270653  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:32.770236  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:31.686382  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:34.186234  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:35.295495  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:37.795570  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:35.270664  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:37.270730  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:39.770903  199300 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:36.685941  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:38.686301  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:40.295131  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:42.295457  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:42.265753  199300 pod_ready.go:81] duration metric: took 4m0.064378894s waiting for pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace to be "Ready" ...
	E0801 23:23:42.265775  199300 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-4kpcx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0801 23:23:42.265794  199300 pod_ready.go:38] duration metric: took 4m9.118271546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 23:23:42.265847  199300 kubeadm.go:630] restartCluster took 4m20.53202452s
	W0801 23:23:42.266030  199300 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0801 23:23:42.266074  199300 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0801 23:23:44.671838  199300 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.405740878s)
	I0801 23:23:44.671911  199300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 23:23:44.681352  199300 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 23:23:44.689010  199300 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 23:23:44.689064  199300 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 23:23:44.695519  199300 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 23:23:44.695561  199300 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 23:23:41.185531  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:43.185957  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:45.186422  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:44.295855  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:46.795675  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:47.186612  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:49.186700  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:48.796238  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:51.295507  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:53.341265  199300 out.go:204]   - Generating certificates and keys ...
	I0801 23:23:53.344035  199300 out.go:204]   - Booting up control plane ...
	I0801 23:23:53.346306  199300 out.go:204]   - Configuring RBAC rules ...
	I0801 23:23:53.348303  199300 cni.go:95] Creating CNI manager for ""
	I0801 23:23:53.348318  199300 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0801 23:23:53.349751  199300 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0801 23:23:53.350895  199300 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0801 23:23:53.354497  199300 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ...
	I0801 23:23:53.354514  199300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0801 23:23:53.368565  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0801 23:23:54.045333  199300 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 23:23:54.045451  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:54.045470  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93 minikube.k8s.io/name=no-preload-20220801231743-9849 minikube.k8s.io/updated_at=2022_08_01T23_23_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:54.138515  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:54.138637  199300 ops.go:34] apiserver oom_adj: -16
	I0801 23:23:54.697075  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:51.686284  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:54.187028  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:53.295690  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:55.795064  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:57.796059  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:55.196678  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:55.697467  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:56.197414  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:56.697366  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:57.196539  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:57.696534  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:58.196544  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:58.697159  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:59.197179  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:59.697504  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:23:56.686799  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:23:59.186048  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:00.295099  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:02.295332  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:00.196612  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:00.696654  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:01.196860  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:01.697296  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:02.197212  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:02.697254  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:03.197434  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:03.697196  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:04.197155  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:04.696662  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:01.186548  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:03.685434  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:05.197078  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:05.696489  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:06.196539  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:06.696901  199300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:24:06.845228  199300 kubeadm.go:1045] duration metric: took 12.799835137s to wait for elevateKubeSystemPrivileges.
	I0801 23:24:06.845264  199300 kubeadm.go:397] StartCluster complete in 4m45.153285028s
	I0801 23:24:06.845286  199300 settings.go:142] acquiring lock: {Name:mk2834aeeab3549d1affce120eb20cd08fd78486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:24:06.845400  199300 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 23:24:06.846769  199300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mk908131de2da31ada6455cebc27e25fe21e4ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:24:07.361167  199300 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220801231743-9849" rescaled to 1
	I0801 23:24:07.361257  199300 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0801 23:24:07.361280  199300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 23:24:07.363347  199300 out.go:177] * Verifying Kubernetes components...
	I0801 23:24:07.361327  199300 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0801 23:24:07.361468  199300 config.go:180] Loaded profile config "no-preload-20220801231743-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:24:07.364864  199300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 23:24:07.363491  199300 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220801231743-9849"
	I0801 23:24:07.364932  199300 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220801231743-9849"
	W0801 23:24:07.364951  199300 addons.go:162] addon storage-provisioner should already be in state true
	I0801 23:24:07.363507  199300 addons.go:65] Setting metrics-server=true in profile "no-preload-20220801231743-9849"
	I0801 23:24:07.365025  199300 addons.go:153] Setting addon metrics-server=true in "no-preload-20220801231743-9849"
	I0801 23:24:07.363509  199300 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220801231743-9849"
	I0801 23:24:07.365062  199300 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220801231743-9849"
	I0801 23:24:07.363527  199300 addons.go:65] Setting dashboard=true in profile "no-preload-20220801231743-9849"
	I0801 23:24:07.365161  199300 addons.go:153] Setting addon dashboard=true in "no-preload-20220801231743-9849"
	W0801 23:24:07.365169  199300 addons.go:162] addon dashboard should already be in state true
	I0801 23:24:07.365215  199300 host.go:66] Checking if "no-preload-20220801231743-9849" exists ...
	I0801 23:24:07.364997  199300 host.go:66] Checking if "no-preload-20220801231743-9849" exists ...
	W0801 23:24:07.365044  199300 addons.go:162] addon metrics-server should already be in state true
	I0801 23:24:07.365344  199300 host.go:66] Checking if "no-preload-20220801231743-9849" exists ...
	I0801 23:24:07.365410  199300 cli_runner.go:164] Run: docker container inspect no-preload-20220801231743-9849 --format={{.State.Status}}
	I0801 23:24:07.365696  199300 cli_runner.go:164] Run: docker container inspect no-preload-20220801231743-9849 --format={{.State.Status}}
	I0801 23:24:07.365743  199300 cli_runner.go:164] Run: docker container inspect no-preload-20220801231743-9849 --format={{.State.Status}}
	I0801 23:24:07.365819  199300 cli_runner.go:164] Run: docker container inspect no-preload-20220801231743-9849 --format={{.State.Status}}
	I0801 23:24:07.413279  199300 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0801 23:24:07.414606  199300 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0801 23:24:07.414643  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0801 23:24:07.414696  199300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801231743-9849
	I0801 23:24:07.417344  199300 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220801231743-9849"
	W0801 23:24:07.417368  199300 addons.go:162] addon default-storageclass should already be in state true
	I0801 23:24:07.417397  199300 host.go:66] Checking if "no-preload-20220801231743-9849" exists ...
	I0801 23:24:07.417745  199300 cli_runner.go:164] Run: docker container inspect no-preload-20220801231743-9849 --format={{.State.Status}}
	I0801 23:24:07.421999  199300 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 23:24:07.423448  199300 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 23:24:07.423474  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0801 23:24:07.423533  199300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801231743-9849
	I0801 23:24:07.428320  199300 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0801 23:24:07.430497  199300 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0801 23:24:04.795368  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:06.795524  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:07.431818  199300 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0801 23:24:07.431838  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0801 23:24:07.431891  199300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801231743-9849
	I0801 23:24:07.448504  199300 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220801231743-9849" to be "Ready" ...
	I0801 23:24:07.448566  199300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0801 23:24:07.454938  199300 node_ready.go:49] node "no-preload-20220801231743-9849" has status "Ready":"True"
	I0801 23:24:07.454963  199300 node_ready.go:38] duration metric: took 6.428714ms waiting for node "no-preload-20220801231743-9849" to be "Ready" ...
	I0801 23:24:07.454974  199300 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 23:24:07.461965  199300 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-4sfph" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:07.472230  199300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801231743-9849/id_rsa Username:docker}
	I0801 23:24:07.478789  199300 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 23:24:07.478814  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 23:24:07.478867  199300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220801231743-9849
	I0801 23:24:07.488912  199300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801231743-9849/id_rsa Username:docker}
	I0801 23:24:07.489294  199300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801231743-9849/id_rsa Username:docker}
	I0801 23:24:07.524831  199300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/no-preload-20220801231743-9849/id_rsa Username:docker}
	I0801 23:24:07.644720  199300 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0801 23:24:07.644767  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0801 23:24:07.645052  199300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 23:24:07.645340  199300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 23:24:07.645382  199300 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0801 23:24:07.645395  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0801 23:24:07.659575  199300 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0801 23:24:07.659609  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0801 23:24:07.661083  199300 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0801 23:24:07.661103  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0801 23:24:07.734420  199300 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 23:24:07.734510  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0801 23:24:07.735199  199300 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0801 23:24:07.735228  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0801 23:24:07.752576  199300 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0801 23:24:07.752600  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0801 23:24:07.754482  199300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0801 23:24:07.841140  199300 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0801 23:24:07.841164  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0801 23:24:07.853305  199300 start.go:809] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I0801 23:24:07.934333  199300 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0801 23:24:07.934392  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0801 23:24:07.952370  199300 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0801 23:24:07.952398  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0801 23:24:08.030914  199300 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0801 23:24:08.030944  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0801 23:24:08.051399  199300 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 23:24:08.051429  199300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0801 23:24:08.134477  199300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0801 23:24:08.537923  199300 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220801231743-9849"
	I0801 23:24:09.139160  199300 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.0046204s)
	I0801 23:24:09.141061  199300 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0801 23:24:09.143406  199300 addons.go:414] enableAddons completed in 1.782088616s
	I0801 23:24:09.535470  199300 pod_ready.go:102] pod "coredns-6d4b75cb6d-4sfph" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:05.686225  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:08.185760  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:10.187240  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:09.295064  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:11.795268  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:11.976400  199300 pod_ready.go:102] pod "coredns-6d4b75cb6d-4sfph" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:12.975061  199300 pod_ready.go:92] pod "coredns-6d4b75cb6d-4sfph" in "kube-system" namespace has status "Ready":"True"
	I0801 23:24:12.975092  199300 pod_ready.go:81] duration metric: took 5.513100194s waiting for pod "coredns-6d4b75cb6d-4sfph" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:12.975104  199300 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-zxjqv" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:12.685936  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:14.686073  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:14.986069  199300 pod_ready.go:92] pod "coredns-6d4b75cb6d-zxjqv" in "kube-system" namespace has status "Ready":"True"
	I0801 23:24:14.986098  199300 pod_ready.go:81] duration metric: took 2.01098644s waiting for pod "coredns-6d4b75cb6d-zxjqv" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:14.986111  199300 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220801231743-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:14.991292  199300 pod_ready.go:92] pod "etcd-no-preload-20220801231743-9849" in "kube-system" namespace has status "Ready":"True"
	I0801 23:24:14.991359  199300 pod_ready.go:81] duration metric: took 5.240279ms waiting for pod "etcd-no-preload-20220801231743-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:14.991385  199300 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220801231743-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:14.996407  199300 pod_ready.go:92] pod "kube-apiserver-no-preload-20220801231743-9849" in "kube-system" namespace has status "Ready":"True"
	I0801 23:24:14.996427  199300 pod_ready.go:81] duration metric: took 5.026773ms waiting for pod "kube-apiserver-no-preload-20220801231743-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:14.996436  199300 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220801231743-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:15.030563  199300 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220801231743-9849" in "kube-system" namespace has status "Ready":"True"
	I0801 23:24:15.030585  199300 pod_ready.go:81] duration metric: took 34.142623ms waiting for pod "kube-controller-manager-no-preload-20220801231743-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:15.030595  199300 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ghxsf" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:15.542475  199300 pod_ready.go:92] pod "kube-proxy-ghxsf" in "kube-system" namespace has status "Ready":"True"
	I0801 23:24:15.542509  199300 pod_ready.go:81] duration metric: took 511.90781ms waiting for pod "kube-proxy-ghxsf" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:15.542522  199300 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220801231743-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:15.865459  199300 pod_ready.go:92] pod "kube-scheduler-no-preload-20220801231743-9849" in "kube-system" namespace has status "Ready":"True"
	I0801 23:24:15.865482  199300 pod_ready.go:81] duration metric: took 322.951925ms waiting for pod "kube-scheduler-no-preload-20220801231743-9849" in "kube-system" namespace to be "Ready" ...
	I0801 23:24:15.865489  199300 pod_ready.go:38] duration metric: took 8.410504555s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 23:24:15.865508  199300 api_server.go:51] waiting for apiserver process to appear ...
	I0801 23:24:15.865541  199300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:24:15.876018  199300 api_server.go:71] duration metric: took 8.514726028s to wait for apiserver process to appear ...
	I0801 23:24:15.876039  199300 api_server.go:87] waiting for apiserver healthz status ...
	I0801 23:24:15.876048  199300 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0801 23:24:15.880433  199300 api_server.go:266] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0801 23:24:15.881183  199300 api_server.go:140] control plane version: v1.24.3
	I0801 23:24:15.881198  199300 api_server.go:130] duration metric: took 5.154761ms to wait for apiserver health ...
	I0801 23:24:15.881206  199300 system_pods.go:43] waiting for kube-system pods to appear ...
	I0801 23:24:16.019402  199300 system_pods.go:59] 10 kube-system pods found
	I0801 23:24:16.019432  199300 system_pods.go:61] "coredns-6d4b75cb6d-4sfph" [15e79e26-079a-4187-9721-7d9198d81e3a] Running
	I0801 23:24:16.019438  199300 system_pods.go:61] "coredns-6d4b75cb6d-zxjqv" [56d99479-4860-461d-a285-1fb99643ea9c] Running
	I0801 23:24:16.019443  199300 system_pods.go:61] "etcd-no-preload-20220801231743-9849" [d3f0a83d-221e-4e69-b34c-0eafb9fd25ff] Running
	I0801 23:24:16.019448  199300 system_pods.go:61] "kindnet-pv6lw" [13a04793-3439-4544-b6a9-6de0f890395b] Running
	I0801 23:24:16.019453  199300 system_pods.go:61] "kube-apiserver-no-preload-20220801231743-9849" [67fc4b0b-75ce-40c8-b504-ff173d27c567] Running
	I0801 23:24:16.019457  199300 system_pods.go:61] "kube-controller-manager-no-preload-20220801231743-9849" [684edcd5-0c53-40b4-8116-1d39306f720c] Running
	I0801 23:24:16.019461  199300 system_pods.go:61] "kube-proxy-ghxsf" [309b48fb-f2be-4d76-928d-a8634760db6b] Running
	I0801 23:24:16.019465  199300 system_pods.go:61] "kube-scheduler-no-preload-20220801231743-9849" [02dea01c-6781-4f82-8eb8-bf78b61715d6] Running
	I0801 23:24:16.019471  199300 system_pods.go:61] "metrics-server-5c6f97fb75-2jc27" [be69a2b8-95e7-407c-8af7-0289ac8c6c67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 23:24:16.019480  199300 system_pods.go:61] "storage-provisioner" [b9f9d99d-ebfb-42bd-aeae-8cf40d14d12b] Running
	I0801 23:24:16.019485  199300 system_pods.go:74] duration metric: took 138.275116ms to wait for pod list to return data ...
	I0801 23:24:16.019495  199300 default_sa.go:34] waiting for default service account to be created ...
	I0801 23:24:16.183938  199300 default_sa.go:45] found service account: "default"
	I0801 23:24:16.183964  199300 default_sa.go:55] duration metric: took 164.462447ms for default service account to be created ...
	I0801 23:24:16.183973  199300 system_pods.go:116] waiting for k8s-apps to be running ...
	I0801 23:24:16.414216  199300 system_pods.go:86] 10 kube-system pods found
	I0801 23:24:16.414298  199300 system_pods.go:89] "coredns-6d4b75cb6d-4sfph" [15e79e26-079a-4187-9721-7d9198d81e3a] Running
	I0801 23:24:16.414312  199300 system_pods.go:89] "coredns-6d4b75cb6d-zxjqv" [56d99479-4860-461d-a285-1fb99643ea9c] Running
	I0801 23:24:16.414322  199300 system_pods.go:89] "etcd-no-preload-20220801231743-9849" [d3f0a83d-221e-4e69-b34c-0eafb9fd25ff] Running
	I0801 23:24:16.414329  199300 system_pods.go:89] "kindnet-pv6lw" [13a04793-3439-4544-b6a9-6de0f890395b] Running
	I0801 23:24:16.414368  199300 system_pods.go:89] "kube-apiserver-no-preload-20220801231743-9849" [67fc4b0b-75ce-40c8-b504-ff173d27c567] Running
	I0801 23:24:16.414377  199300 system_pods.go:89] "kube-controller-manager-no-preload-20220801231743-9849" [684edcd5-0c53-40b4-8116-1d39306f720c] Running
	I0801 23:24:16.414383  199300 system_pods.go:89] "kube-proxy-ghxsf" [309b48fb-f2be-4d76-928d-a8634760db6b] Running
	I0801 23:24:16.414391  199300 system_pods.go:89] "kube-scheduler-no-preload-20220801231743-9849" [02dea01c-6781-4f82-8eb8-bf78b61715d6] Running
	I0801 23:24:16.414403  199300 system_pods.go:89] "metrics-server-5c6f97fb75-2jc27" [be69a2b8-95e7-407c-8af7-0289ac8c6c67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0801 23:24:16.414417  199300 system_pods.go:89] "storage-provisioner" [b9f9d99d-ebfb-42bd-aeae-8cf40d14d12b] Running
	I0801 23:24:16.414426  199300 system_pods.go:126] duration metric: took 230.447846ms to wait for k8s-apps to be running ...
	I0801 23:24:16.414434  199300 system_svc.go:44] waiting for kubelet service to be running ....
	I0801 23:24:16.414490  199300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 23:24:16.427408  199300 system_svc.go:56] duration metric: took 12.964743ms WaitForService to wait for kubelet.
	I0801 23:24:16.427439  199300 kubeadm.go:572] duration metric: took 9.066148374s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0801 23:24:16.427467  199300 node_conditions.go:102] verifying NodePressure condition ...
	I0801 23:24:16.584231  199300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0801 23:24:16.584259  199300 node_conditions.go:123] node cpu capacity is 8
	I0801 23:24:16.584273  199300 node_conditions.go:105] duration metric: took 156.800414ms to run NodePressure ...
	I0801 23:24:16.584286  199300 start.go:216] waiting for startup goroutines ...
	I0801 23:24:16.623341  199300 start.go:506] kubectl: 1.24.3, cluster: 1.24.3 (minor skew: 0)
	I0801 23:24:16.625787  199300 out.go:177] * Done! kubectl is now configured to use "no-preload-20220801231743-9849" cluster and "default" namespace by default
	I0801 23:24:13.796342  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:16.295628  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:16.686184  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:18.686805  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:18.296164  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:20.317607  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:22.794986  215236 pod_ready.go:102] pod "metrics-server-5c6f97fb75-6d5mq" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:25.028729  164558 out.go:204]   - Generating certificates and keys ...
	I0801 23:24:21.185955  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:23.686222  204372 pod_ready.go:102] pod "metrics-server-7958775c-xjr58" in "kube-system" namespace has status "Ready":"False"
	I0801 23:24:25.031634  164558 out.go:204]   - Booting up control plane ...
	I0801 23:24:25.033986  164558 kubeadm.go:397] StartCluster complete in 7m55.520543761s
	I0801 23:24:25.034044  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0801 23:24:25.034102  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0801 23:24:25.056715  164558 cri.go:87] found id: ""
	I0801 23:24:25.056736  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.056744  164558 logs.go:276] No container was found matching "kube-apiserver"
	I0801 23:24:25.056751  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0801 23:24:25.056807  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0801 23:24:25.078680  164558 cri.go:87] found id: ""
	I0801 23:24:25.078704  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.078709  164558 logs.go:276] No container was found matching "etcd"
	I0801 23:24:25.078715  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0801 23:24:25.078771  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0801 23:24:25.101528  164558 cri.go:87] found id: ""
	I0801 23:24:25.101553  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.101561  164558 logs.go:276] No container was found matching "coredns"
	I0801 23:24:25.101569  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0801 23:24:25.101619  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0801 23:24:25.126099  164558 cri.go:87] found id: ""
	I0801 23:24:25.126126  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.126133  164558 logs.go:276] No container was found matching "kube-scheduler"
	I0801 23:24:25.126142  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0801 23:24:25.126200  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0801 23:24:25.151037  164558 cri.go:87] found id: ""
	I0801 23:24:25.151067  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.151076  164558 logs.go:276] No container was found matching "kube-proxy"
	I0801 23:24:25.151084  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0801 23:24:25.151140  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0801 23:24:25.173426  164558 cri.go:87] found id: ""
	I0801 23:24:25.173452  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.173461  164558 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0801 23:24:25.173469  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0801 23:24:25.173518  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0801 23:24:25.195606  164558 cri.go:87] found id: ""
	I0801 23:24:25.195633  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.195640  164558 logs.go:276] No container was found matching "storage-provisioner"
	I0801 23:24:25.195648  164558 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0801 23:24:25.195704  164558 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0801 23:24:25.219819  164558 cri.go:87] found id: ""
	I0801 23:24:25.219840  164558 logs.go:274] 0 containers: []
	W0801 23:24:25.219846  164558 logs.go:276] No container was found matching "kube-controller-manager"
	I0801 23:24:25.219856  164558 logs.go:123] Gathering logs for containerd ...
	I0801 23:24:25.219865  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0801 23:24:25.265000  164558 logs.go:123] Gathering logs for container status ...
	I0801 23:24:25.265031  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0801 23:24:25.289751  164558 logs.go:123] Gathering logs for kubelet ...
	I0801 23:24:25.289777  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0801 23:24:25.340398  164558 logs.go:138] Found kubelet problem: Aug 01 23:24:24 kubernetes-upgrade-20220801231451-9849 kubelet[11658]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:24:25.385968  164558 logs.go:123] Gathering logs for dmesg ...
	I0801 23:24:25.386006  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0801 23:24:25.402731  164558 logs.go:123] Gathering logs for describe nodes ...
	I0801 23:24:25.402760  164558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0801 23:24:25.450999  164558 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0801 23:24:25.451038  164558 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0801 23:22:29.159019    9767 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0801 23:24:25.451072  164558 out.go:239] * 
	W0801 23:24:25.451303  164558 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0801 23:22:29.159019    9767 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 23:24:25.451337  164558 out.go:239] * 
	W0801 23:24:25.452624  164558 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0801 23:24:25.455427  164558 out.go:177] X Problems detected in kubelet:
	I0801 23:24:25.456749  164558 out.go:177]   Aug 01 23:24:24 kubernetes-upgrade-20220801231451-9849 kubelet[11658]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
	I0801 23:24:25.460294  164558 out.go:177] 
	W0801 23:24:25.461628  164558 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.24.3
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1013-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0801 23:22:29.159019    9767 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1013-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0801 23:24:25.461788  164558 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0801 23:24:25.461866  164558 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0801 23:24:25.464073  164558 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2022-08-01 23:15:53 UTC, end at Mon 2022-08-01 23:24:26 UTC. --
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.869046542Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.884830540Z" level=info msg="StopPodSandbox for \"this\""
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.884885918Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.901421096Z" level=info msg="StopPodSandbox for \"endpoint\""
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.901481958Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.930565015Z" level=info msg="StopPodSandbox for \"is\""
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.930621703Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.947273006Z" level=info msg="StopPodSandbox for \"deprecated,\""
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.947325594Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.963427990Z" level=info msg="StopPodSandbox for \"please\""
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.963483380Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.979747147Z" level=info msg="StopPodSandbox for \"consider\""
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.979808301Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.996758359Z" level=info msg="StopPodSandbox for \"using\""
	Aug 01 23:22:28 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:28.996811754Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
	Aug 01 23:22:29 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:29.012011751Z" level=info msg="StopPodSandbox for \"full\""
	Aug 01 23:22:29 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:29.012061025Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
	Aug 01 23:22:29 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:29.028425819Z" level=info msg="StopPodSandbox for \"URL\""
	Aug 01 23:22:29 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:29.028473023Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
	Aug 01 23:22:29 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:29.045508504Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Aug 01 23:22:29 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:29.045559832Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Aug 01 23:22:29 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:29.075009806Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Aug 01 23:22:29 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:29.075075834Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Aug 01 23:22:29 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:29.091449439Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Aug 01 23:22:29 kubernetes-upgrade-20220801231451-9849 containerd[507]: time="2022-08-01T23:22:29.091514420Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000004] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	[  +1.000051] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ff5116de268a
	[  +0.000007] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ff5116de268a
	[  +0.000002] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	[  +0.003955] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ff5116de268a
	[  +0.000006] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	[  +2.011912] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ff5116de268a
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ff5116de268a
	[  +0.000005] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ff5116de268a
	[  +0.000002] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	[  +4.063618] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ff5116de268a
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ff5116de268a
	[  +0.000005] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ff5116de268a
	[  +0.000003] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	[  +8.195319] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ff5116de268a
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ff5116de268a
	[  +0.000004] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ff5116de268a
	[  +0.000001] ll header: 00000000: 02 42 eb c9 6c 30 02 42 c0 a8 4c 02 08 00
	
	* 
	* ==> kernel <==
	*  23:24:26 up  1:06,  0 users,  load average: 1.38, 2.07, 1.98
	Linux kubernetes-upgrade-20220801231451-9849 5.15.0-1013-gcp #18~20.04.1-Ubuntu SMP Sun Jul 3 08:20:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-08-01 23:15:53 UTC, end at Mon 2022-08-01 23:24:26 UTC. --
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --storage-driver-buffer-duration duration                  Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction (default 1m0s) (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --storage-driver-db string                                 database name (default "cadvisor") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --storage-driver-host string                               database host:port (default "localhost:8086") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --storage-driver-password string                           database password (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --storage-driver-secure                                    use secure connection with database (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --storage-driver-table string                              table name (default "stats") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --storage-driver-user string                               database username (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --streaming-connection-idle-timeout duration               Maximum time a streaming connection can be idle before the connection is automatically closed. 0 indicates no timeout. Example: '5m'. Note: All connections to the kubelet server have a maximum duration of 4 hours. (default 4h0m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --sync-frequency duration                                  Max period between synchronizing running containers and config (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --system-cgroups string                                    Optional absolute name of cgroups in which to place all non-kernel processes that are not already inside a cgroup under '/'. Empty for no container. Rolling back the flag requires a reboot. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --system-reserved mapStringString                          A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi) pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more detail. [default=none] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --system-reserved-cgroup string                            Absolute name of the top level cgroup that is used to manage non-kubernetes components for which compute resources were reserved via '--system-reserved' flag. Ex. '/system-reserved'. [default=''] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --tls-cert-file string                                     File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --tls-cipher-suites strings                                Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:                 Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:                 Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --tls-min-version string                                   Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --tls-private-key-file string                              File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --topology-manager-policy string                           Topology Manager policy to use. Possible values: 'none', 'best-effort', 'restricted', 'single-numa-node'. (default "none") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --topology-manager-scope string                            Scope to which topology hints applied. Topology Manager collects hints from Hint Providers and applies them to defined scope to ensure the pod admission. Possible values: 'container', 'pod'. (default "container") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:   -v, --v Level                                                  number for the log level verbosity
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --version version[=true]                                   Print version information and quit
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --vmodule pattern=N,...                                    comma-separated list of pattern=N settings for file-filtered logging (only works for text log format)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --volume-plugin-dir string                                 The full path of the directory in which to search for additional third party volume plugins (default "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	Aug 01 23:24:26 kubernetes-upgrade-20220801231451-9849 kubelet[11828]:       --volume-stats-agg-period duration                         Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes.  To disable volume calculations, set to a negative number. (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0801 23:24:26.693511  225124 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220801231451-9849 -n kubernetes-upgrade-20220801231451-9849
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220801231451-9849 -n kubernetes-upgrade-20220801231451-9849: exit status 2 (390.951939ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-20220801231451-9849" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220801231451-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220801231451-9849

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220801231451-9849: (2.237622651s)
--- FAIL: TestKubernetesUpgrade (577.83s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (527.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220801231635-9849 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd
E0801 23:28:03.163277    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220801231635-9849 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (8m47.490935928s)

                                                
                                                
-- stdout --
	* [calico-20220801231635-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node calico-20220801231635-9849 in cluster calico-20220801231635-9849
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0801 23:27:50.682577  261584 out.go:296] Setting OutFile to fd 1 ...
	I0801 23:27:50.682671  261584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:27:50.682680  261584 out.go:309] Setting ErrFile to fd 2...
	I0801 23:27:50.682684  261584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:27:50.682787  261584 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 23:27:50.683353  261584 out.go:303] Setting JSON to false
	I0801 23:27:50.684880  261584 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4221,"bootTime":1659392250,"procs":841,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0801 23:27:50.684972  261584 start.go:125] virtualization: kvm guest
	I0801 23:27:50.687568  261584 out.go:177] * [calico-20220801231635-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0801 23:27:50.688955  261584 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 23:27:50.688922  261584 notify.go:193] Checking for updates...
	I0801 23:27:50.690234  261584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 23:27:50.691578  261584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 23:27:50.692918  261584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 23:27:50.694406  261584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0801 23:27:50.696098  261584 config.go:180] Loaded profile config "cilium-20220801231635-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:27:50.696225  261584 config.go:180] Loaded profile config "default-k8s-different-port-20220801232429-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:27:50.696300  261584 config.go:180] Loaded profile config "kindnet-20220801231634-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:27:50.696352  261584 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 23:27:50.735152  261584 docker.go:137] docker version: linux-20.10.17
	I0801 23:27:50.735248  261584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 23:27:50.845756  261584 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-08-01 23:27:50.765320541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 23:27:50.845882  261584 docker.go:254] overlay module found
	I0801 23:27:50.848910  261584 out.go:177] * Using the docker driver based on user configuration
	I0801 23:27:50.850216  261584 start.go:284] selected driver: docker
	I0801 23:27:50.850229  261584 start.go:808] validating driver "docker" against <nil>
	I0801 23:27:50.850252  261584 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 23:27:50.851134  261584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 23:27:50.955687  261584 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-08-01 23:27:50.879838907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 23:27:50.955854  261584 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0801 23:27:50.956087  261584 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0801 23:27:50.958170  261584 out.go:177] * Using Docker driver with root privileges
	I0801 23:27:50.959466  261584 cni.go:95] Creating CNI manager for "calico"
	I0801 23:27:50.959484  261584 start_flags.go:305] Found "Calico" CNI - setting NetworkPlugin=cni
	I0801 23:27:50.959495  261584 start_flags.go:310] config:
	{Name:calico-20220801231635-9849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:calico-20220801231635-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 23:27:50.961084  261584 out.go:177] * Starting control plane node calico-20220801231635-9849 in cluster calico-20220801231635-9849
	I0801 23:27:50.962391  261584 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0801 23:27:50.963608  261584 out.go:177] * Pulling base image ...
	I0801 23:27:50.964798  261584 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0801 23:27:50.964829  261584 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 23:27:50.964834  261584 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4
	I0801 23:27:50.964952  261584 cache.go:57] Caching tarball of preloaded images
	I0801 23:27:50.965183  261584 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0801 23:27:50.965211  261584 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on containerd
	I0801 23:27:50.965310  261584 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/config.json ...
	I0801 23:27:50.965335  261584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/config.json: {Name:mke581ba33385cff08f03dd4e505aca3b5369db2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:27:51.002077  261584 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull
	I0801 23:27:51.002105  261584 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load
	I0801 23:27:51.002116  261584 cache.go:208] Successfully downloaded all kic artifacts
	I0801 23:27:51.002155  261584 start.go:371] acquiring machines lock for calico-20220801231635-9849: {Name:mka4d42ed8c2261a2224129c48ba723254a086b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0801 23:27:51.002288  261584 start.go:375] acquired machines lock for "calico-20220801231635-9849" in 107.331µs
	I0801 23:27:51.002324  261584 start.go:92] Provisioning new machine with config: &{Name:calico-20220801231635-9849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:calico-20220801231635-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0801 23:27:51.002483  261584 start.go:132] createHost starting for "" (driver="docker")
	I0801 23:27:51.004403  261584 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0801 23:27:51.004683  261584 start.go:166] libmachine.API.Create for "calico-20220801231635-9849" (driver="docker")
	I0801 23:27:51.004726  261584 client.go:168] LocalClient.Create starting
	I0801 23:27:51.004802  261584 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem
	I0801 23:27:51.004844  261584 main.go:134] libmachine: Decoding PEM data...
	I0801 23:27:51.004869  261584 main.go:134] libmachine: Parsing certificate...
	I0801 23:27:51.004966  261584 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem
	I0801 23:27:51.004997  261584 main.go:134] libmachine: Decoding PEM data...
	I0801 23:27:51.005012  261584 main.go:134] libmachine: Parsing certificate...
	I0801 23:27:51.005426  261584 cli_runner.go:164] Run: docker network inspect calico-20220801231635-9849 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0801 23:27:51.041919  261584 cli_runner.go:211] docker network inspect calico-20220801231635-9849 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0801 23:27:51.041991  261584 network_create.go:272] running [docker network inspect calico-20220801231635-9849] to gather additional debugging logs...
	I0801 23:27:51.042011  261584 cli_runner.go:164] Run: docker network inspect calico-20220801231635-9849
	W0801 23:27:51.079619  261584 cli_runner.go:211] docker network inspect calico-20220801231635-9849 returned with exit code 1
	I0801 23:27:51.079650  261584 network_create.go:275] error running [docker network inspect calico-20220801231635-9849]: docker network inspect calico-20220801231635-9849: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220801231635-9849
	I0801 23:27:51.079666  261584 network_create.go:277] output of [docker network inspect calico-20220801231635-9849]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220801231635-9849
	
	** /stderr **
	I0801 23:27:51.079709  261584 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0801 23:27:51.115179  261584 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-af4b0fae74e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:2f:1e:99:f8}}
	I0801 23:27:51.116205  261584 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-cc181ede96d5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:48:34:33:cd}}
	I0801 23:27:51.116862  261584 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-0072d9760f25 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:4b:31:cc:21}}
	I0801 23:27:51.117944  261584 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc000010a50] misses:0}
	I0801 23:27:51.117979  261584 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0801 23:27:51.117997  261584 network_create.go:115] attempt to create docker network calico-20220801231635-9849 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0801 23:27:51.118044  261584 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220801231635-9849 calico-20220801231635-9849
	I0801 23:27:51.193005  261584 network_create.go:99] docker network calico-20220801231635-9849 192.168.76.0/24 created
	I0801 23:27:51.193046  261584 kic.go:106] calculated static IP "192.168.76.2" for the "calico-20220801231635-9849" container
	I0801 23:27:51.193107  261584 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0801 23:27:51.235364  261584 cli_runner.go:164] Run: docker volume create calico-20220801231635-9849 --label name.minikube.sigs.k8s.io=calico-20220801231635-9849 --label created_by.minikube.sigs.k8s.io=true
	I0801 23:27:51.277996  261584 oci.go:103] Successfully created a docker volume calico-20220801231635-9849
	I0801 23:27:51.278085  261584 cli_runner.go:164] Run: docker run --rm --name calico-20220801231635-9849-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220801231635-9849 --entrypoint /usr/bin/test -v calico-20220801231635-9849:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib
	I0801 23:27:51.917596  261584 oci.go:107] Successfully prepared a docker volume calico-20220801231635-9849
	I0801 23:27:51.917650  261584 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0801 23:27:51.917676  261584 kic.go:179] Starting extracting preloaded images to volume ...
	I0801 23:27:51.917745  261584 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220801231635-9849:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0801 23:27:58.811852  261584 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220801231635-9849:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.89401162s)
	I0801 23:27:58.811883  261584 kic.go:188] duration metric: took 6.894203 seconds to extract preloaded images to volume
	W0801 23:27:58.812014  261584 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0801 23:27:58.812135  261584 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0801 23:27:58.926536  261584 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220801231635-9849 --name calico-20220801231635-9849 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220801231635-9849 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220801231635-9849 --network calico-20220801231635-9849 --ip 192.168.76.2 --volume calico-20220801231635-9849:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8
	I0801 23:27:59.329548  261584 cli_runner.go:164] Run: docker container inspect calico-20220801231635-9849 --format={{.State.Running}}
	I0801 23:27:59.369235  261584 cli_runner.go:164] Run: docker container inspect calico-20220801231635-9849 --format={{.State.Status}}
	I0801 23:27:59.403335  261584 cli_runner.go:164] Run: docker exec calico-20220801231635-9849 stat /var/lib/dpkg/alternatives/iptables
	I0801 23:27:59.464241  261584 oci.go:144] the created container "calico-20220801231635-9849" has a running status.
	I0801 23:27:59.464272  261584 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/calico-20220801231635-9849/id_rsa...
	I0801 23:27:59.893081  261584 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/calico-20220801231635-9849/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0801 23:27:59.987896  261584 cli_runner.go:164] Run: docker container inspect calico-20220801231635-9849 --format={{.State.Status}}
	I0801 23:28:00.025352  261584 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0801 23:28:00.025381  261584 kic_runner.go:114] Args: [docker exec --privileged calico-20220801231635-9849 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0801 23:28:00.109163  261584 cli_runner.go:164] Run: docker container inspect calico-20220801231635-9849 --format={{.State.Status}}
	I0801 23:28:00.143620  261584 machine.go:88] provisioning docker machine ...
	I0801 23:28:00.143655  261584 ubuntu.go:169] provisioning hostname "calico-20220801231635-9849"
	I0801 23:28:00.143710  261584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220801231635-9849
	I0801 23:28:00.175383  261584 main.go:134] libmachine: Using SSH client type: native
	I0801 23:28:00.175566  261584 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0801 23:28:00.175584  261584 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220801231635-9849 && echo "calico-20220801231635-9849" | sudo tee /etc/hostname
	I0801 23:28:00.302566  261584 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220801231635-9849
	
	I0801 23:28:00.302649  261584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220801231635-9849
	I0801 23:28:00.336635  261584 main.go:134] libmachine: Using SSH client type: native
	I0801 23:28:00.336766  261584 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0801 23:28:00.336787  261584 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220801231635-9849' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220801231635-9849/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220801231635-9849' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0801 23:28:00.449667  261584 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0801 23:28:00.449695  261584 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube}
	I0801 23:28:00.449737  261584 ubuntu.go:177] setting up certificates
	I0801 23:28:00.449756  261584 provision.go:83] configureAuth start
	I0801 23:28:00.449805  261584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220801231635-9849
	I0801 23:28:00.482525  261584 provision.go:138] copyHostCerts
	I0801 23:28:00.482575  261584 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem, removing ...
	I0801 23:28:00.482584  261584 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem
	I0801 23:28:00.482655  261584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cert.pem (1123 bytes)
	I0801 23:28:00.482786  261584 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem, removing ...
	I0801 23:28:00.482800  261584 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem
	I0801 23:28:00.482847  261584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/key.pem (1675 bytes)
	I0801 23:28:00.482944  261584 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem, removing ...
	I0801 23:28:00.482958  261584 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem
	I0801 23:28:00.482992  261584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.pem (1082 bytes)
	I0801 23:28:00.483063  261584 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem org=jenkins.calico-20220801231635-9849 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220801231635-9849]
	I0801 23:28:00.596097  261584 provision.go:172] copyRemoteCerts
	I0801 23:28:00.596154  261584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0801 23:28:00.596185  261584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220801231635-9849
	I0801 23:28:00.630420  261584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/calico-20220801231635-9849/id_rsa Username:docker}
	I0801 23:28:00.713650  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0801 23:28:00.732500  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0801 23:28:00.751578  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0801 23:28:00.768431  261584 provision.go:86] duration metric: configureAuth took 318.663241ms
	I0801 23:28:00.768456  261584 ubuntu.go:193] setting minikube options for container-runtime
	I0801 23:28:00.768628  261584 config.go:180] Loaded profile config "calico-20220801231635-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:28:00.768644  261584 machine.go:91] provisioned docker machine in 625.003732ms
	I0801 23:28:00.768650  261584 client.go:171] LocalClient.Create took 9.763918376s
	I0801 23:28:00.768669  261584 start.go:174] duration metric: libmachine.API.Create for "calico-20220801231635-9849" took 9.763986949s
	I0801 23:28:00.768679  261584 start.go:307] post-start starting for "calico-20220801231635-9849" (driver="docker")
	I0801 23:28:00.768685  261584 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0801 23:28:00.768728  261584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0801 23:28:00.768764  261584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220801231635-9849
	I0801 23:28:00.801881  261584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/calico-20220801231635-9849/id_rsa Username:docker}
	I0801 23:28:00.886118  261584 ssh_runner.go:195] Run: cat /etc/os-release
	I0801 23:28:00.888928  261584 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0801 23:28:00.888959  261584 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0801 23:28:00.888970  261584 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0801 23:28:00.888976  261584 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0801 23:28:00.888985  261584 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/addons for local assets ...
	I0801 23:28:00.889036  261584 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files for local assets ...
	I0801 23:28:00.889107  261584 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/98492.pem -> 98492.pem in /etc/ssl/certs
	I0801 23:28:00.889182  261584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0801 23:28:00.896639  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/98492.pem --> /etc/ssl/certs/98492.pem (1708 bytes)
	I0801 23:28:00.914399  261584 start.go:310] post-start completed in 145.703303ms
	I0801 23:28:00.914938  261584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220801231635-9849
	I0801 23:28:00.949548  261584 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/config.json ...
	I0801 23:28:00.949848  261584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 23:28:00.949902  261584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220801231635-9849
	I0801 23:28:00.985052  261584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/calico-20220801231635-9849/id_rsa Username:docker}
	I0801 23:28:01.066794  261584 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0801 23:28:01.070672  261584 start.go:135] duration metric: createHost completed in 10.068175275s
	I0801 23:28:01.070696  261584 start.go:82] releasing machines lock for "calico-20220801231635-9849", held for 10.068390865s
	I0801 23:28:01.070782  261584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220801231635-9849
	I0801 23:28:01.104891  261584 ssh_runner.go:195] Run: systemctl --version
	I0801 23:28:01.104954  261584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220801231635-9849
	I0801 23:28:01.104965  261584 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0801 23:28:01.105041  261584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220801231635-9849
	I0801 23:28:01.141346  261584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/calico-20220801231635-9849/id_rsa Username:docker}
	I0801 23:28:01.142905  261584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/calico-20220801231635-9849/id_rsa Username:docker}
	I0801 23:28:01.252862  261584 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0801 23:28:01.262923  261584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0801 23:28:01.271605  261584 docker.go:188] disabling docker service ...
	I0801 23:28:01.271656  261584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0801 23:28:01.287742  261584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0801 23:28:01.297115  261584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0801 23:28:01.375199  261584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0801 23:28:01.451451  261584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0801 23:28:01.460649  261584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0801 23:28:01.473413  261584 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0801 23:28:01.481475  261584 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0801 23:28:01.489582  261584 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0801 23:28:01.497404  261584 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
	I0801 23:28:01.505211  261584 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0801 23:28:01.512540  261584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
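Editor's note: the base64 payload piped through `base64 -d` above is just containerd's one-line TOML config-version header. A quick local check (no minikube involved) confirms what ends up in `02-containerd.conf`:

```shell
# Decode the payload minikube writes to
# /etc/containerd/containerd.conf.d/02-containerd.conf above;
# it is the one-line TOML version header.
decoded=$(printf %s 'dmVyc2lvbiA9IDIK' | base64 -d)
echo "${decoded}"   # version = 2
```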
	I0801 23:28:01.524948  261584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0801 23:28:01.531284  261584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0801 23:28:01.537932  261584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0801 23:28:01.610723  261584 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0801 23:28:01.696307  261584 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0801 23:28:01.696423  261584 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0801 23:28:01.700370  261584 start.go:471] Will wait 60s for crictl version
	I0801 23:28:01.700430  261584 ssh_runner.go:195] Run: sudo crictl version
	I0801 23:28:01.728858  261584 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-08-01T23:28:01Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
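Editor's note: the `retry.go:31` line above is minikube backing off around `sudo crictl version` while containerd finishes initializing. The same wait-for-readiness loop can be sketched in plain shell; the probe command, attempt count, and interval here are illustrative, not minikube's actual backoff policy:

```shell
# Poll an arbitrary probe command until it succeeds, mirroring the
# retry minikube performs on `sudo crictl version` above.
# Usage: wait_for "<command>" <attempts> <interval-seconds>
wait_for() {
  local cmd=$1 attempts=$2 interval=$3 i
  for ((i = 1; i <= attempts; i++)); do
    if eval "${cmd}" >/dev/null 2>&1; then
      echo "ready after ${i} attempt(s)"
      return 0
    fi
    sleep "${interval}"
  done
  echo "not ready after ${attempts} attempts" >&2
  return 1
}
```

In the log, the probe is `sudo crictl version` with a roughly 11-second pause before the single retry at 23:28:12.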
	I0801 23:28:12.776404  261584 ssh_runner.go:195] Run: sudo crictl version
	I0801 23:28:12.798789  261584 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.6
	RuntimeApiVersion:  v1alpha2
	I0801 23:28:12.798856  261584 ssh_runner.go:195] Run: containerd --version
	I0801 23:28:12.833253  261584 ssh_runner.go:195] Run: containerd --version
	I0801 23:28:12.866214  261584 out.go:177] * Preparing Kubernetes v1.24.3 on containerd 1.6.6 ...
	I0801 23:28:12.867620  261584 cli_runner.go:164] Run: docker network inspect calico-20220801231635-9849 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0801 23:28:12.904063  261584 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0801 23:28:12.907990  261584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
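Editor's note: the one-liner above (and its twin for `control-plane.minikube.internal` later in the log) is an idempotent hosts-file update: drop any existing tab-separated entry for the name, then append the fresh mapping via a scratch file. A sudo-free sketch of the same pattern (the function name and file argument are illustrative):

```shell
# Idempotently map a host name to an IP in a hosts-style file: remove any
# existing tab-separated entry for the name, then append the new one.
update_hosts() {
  local file=$1 ip=$2 name=$3 tmp
  tmp=$(mktemp)
  { grep -v $'\t'"${name}"'$' "${file}" 2>/dev/null; printf '%s\t%s\n' "${ip}" "${name}"; } > "${tmp}"
  mv "${tmp}" "${file}"
}
```

Running it twice for the same name replaces the entry instead of duplicating it, which is why minikube can re-run this on every start.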
	I0801 23:28:12.918577  261584 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime containerd
	I0801 23:28:12.918636  261584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0801 23:28:12.944054  261584 containerd.go:547] all images are preloaded for containerd runtime.
	I0801 23:28:12.944083  261584 containerd.go:461] Images already preloaded, skipping extraction
	I0801 23:28:12.944155  261584 ssh_runner.go:195] Run: sudo crictl images --output json
	I0801 23:28:12.969186  261584 containerd.go:547] all images are preloaded for containerd runtime.
	I0801 23:28:12.969210  261584 cache_images.go:84] Images are preloaded, skipping loading
	I0801 23:28:12.969262  261584 ssh_runner.go:195] Run: sudo crictl info
	I0801 23:28:12.995835  261584 cni.go:95] Creating CNI manager for "calico"
	I0801 23:28:12.995869  261584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0801 23:28:12.995886  261584 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220801231635-9849 NodeName:calico-20220801231635-9849 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0801 23:28:12.996068  261584 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-20220801231635-9849"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0801 23:28:12.996176  261584 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-20220801231635-9849 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:calico-20220801231635-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0801 23:28:12.996242  261584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0801 23:28:13.003997  261584 binaries.go:44] Found k8s binaries, skipping transfer
	I0801 23:28:13.004071  261584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0801 23:28:13.012142  261584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (519 bytes)
	I0801 23:28:13.026251  261584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0801 23:28:13.041703  261584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2055 bytes)
	I0801 23:28:13.054838  261584 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0801 23:28:13.057952  261584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0801 23:28:13.067787  261584 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849 for IP: 192.168.76.2
	I0801 23:28:13.067895  261584 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key
	I0801 23:28:13.067950  261584 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key
	I0801 23:28:13.068023  261584 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/client.key
	I0801 23:28:13.068042  261584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/client.crt with IP's: []
	I0801 23:28:13.182893  261584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/client.crt ...
	I0801 23:28:13.182924  261584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/client.crt: {Name:mke0f5e1febebb3b2a8e28218b0c59769bc4ce11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:28:13.183127  261584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/client.key ...
	I0801 23:28:13.183149  261584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/client.key: {Name:mk4d35fac956126f8042cd40763de8c0883dc2b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:28:13.183265  261584 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/apiserver.key.31bdca25
	I0801 23:28:13.183284  261584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0801 23:28:13.584748  261584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/apiserver.crt.31bdca25 ...
	I0801 23:28:13.584785  261584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/apiserver.crt.31bdca25: {Name:mk521408ce27b24991b15fc68831e2009efe3efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:28:13.585005  261584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/apiserver.key.31bdca25 ...
	I0801 23:28:13.585025  261584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/apiserver.key.31bdca25: {Name:mkcad74788cb30db66a91ab05458414d9a2b173b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:28:13.585135  261584 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/apiserver.crt
	I0801 23:28:13.585211  261584 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/apiserver.key
	I0801 23:28:13.585281  261584 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/proxy-client.key
	I0801 23:28:13.585302  261584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/proxy-client.crt with IP's: []
	I0801 23:28:13.720744  261584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/proxy-client.crt ...
	I0801 23:28:13.720770  261584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/proxy-client.crt: {Name:mk36ddf66a959237adf122b26a4644d8aee6f5cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:28:13.720970  261584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/proxy-client.key ...
	I0801 23:28:13.720987  261584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/proxy-client.key: {Name:mk7e2ab386f184af3637636c1d50c98e84e607f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:28:13.721164  261584 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/9849.pem (1338 bytes)
	W0801 23:28:13.721201  261584 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/9849_empty.pem, impossibly tiny 0 bytes
	I0801 23:28:13.721214  261584 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca-key.pem (1679 bytes)
	I0801 23:28:13.721239  261584 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/ca.pem (1082 bytes)
	I0801 23:28:13.721264  261584 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/cert.pem (1123 bytes)
	I0801 23:28:13.721286  261584 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/key.pem (1675 bytes)
	I0801 23:28:13.721332  261584 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/98492.pem (1708 bytes)
	I0801 23:28:13.721921  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0801 23:28:13.741091  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0801 23:28:13.758973  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0801 23:28:13.782606  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/calico-20220801231635-9849/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0801 23:28:13.800929  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0801 23:28:13.818018  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0801 23:28:13.835641  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0801 23:28:13.853582  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0801 23:28:13.872220  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/certs/9849.pem --> /usr/share/ca-certificates/9849.pem (1338 bytes)
	I0801 23:28:13.889582  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/ssl/certs/98492.pem --> /usr/share/ca-certificates/98492.pem (1708 bytes)
	I0801 23:28:13.907141  261584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0801 23:28:13.924283  261584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0801 23:28:13.936644  261584 ssh_runner.go:195] Run: openssl version
	I0801 23:28:13.941348  261584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0801 23:28:13.948548  261584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0801 23:28:13.951548  261584 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug  1 22:46 /usr/share/ca-certificates/minikubeCA.pem
	I0801 23:28:13.951589  261584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0801 23:28:13.956113  261584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0801 23:28:13.963060  261584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9849.pem && ln -fs /usr/share/ca-certificates/9849.pem /etc/ssl/certs/9849.pem"
	I0801 23:28:13.970134  261584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9849.pem
	I0801 23:28:13.973060  261584 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Aug  1 22:50 /usr/share/ca-certificates/9849.pem
	I0801 23:28:13.973103  261584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9849.pem
	I0801 23:28:13.977618  261584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9849.pem /etc/ssl/certs/51391683.0"
	I0801 23:28:13.984511  261584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98492.pem && ln -fs /usr/share/ca-certificates/98492.pem /etc/ssl/certs/98492.pem"
	I0801 23:28:13.991831  261584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98492.pem
	I0801 23:28:13.995073  261584 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Aug  1 22:50 /usr/share/ca-certificates/98492.pem
	I0801 23:28:13.995127  261584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98492.pem
	I0801 23:28:14.000228  261584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98492.pem /etc/ssl/certs/3ec20f2e.0"
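Editor's note: the `openssl x509 -hash` / `ln -fs` pairs above install each CA into the system trust store under its OpenSSL subject-hash name (`b5213941.0`, `51391683.0`, `3ec20f2e.0`). A sudo-free sketch of that step, with the target directory as a parameter (the function name is illustrative):

```shell
# Link a PEM certificate into a trust directory under its OpenSSL
# subject hash, as minikube does for /etc/ssl/certs above.
install_ca() {
  local cert=$1 dir=$2 hash
  hash=$(openssl x509 -hash -noout -in "${cert}") || return 1
  ln -fs "${cert}" "${dir}/${hash}.0"
}
```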
	I0801 23:28:14.007523  261584 kubeadm.go:395] StartCluster: {Name:calico-20220801231635-9849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:calico-20220801231635-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 23:28:14.007607  261584 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0801 23:28:14.007642  261584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0801 23:28:14.031182  261584 cri.go:87] found id: ""
	I0801 23:28:14.031237  261584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0801 23:28:14.037939  261584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0801 23:28:14.044730  261584 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0801 23:28:14.044777  261584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0801 23:28:14.051434  261584 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0801 23:28:14.051476  261584 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0801 23:28:14.315269  261584 out.go:204]   - Generating certificates and keys ...
	I0801 23:28:17.038072  261584 out.go:204]   - Booting up control plane ...
	I0801 23:28:24.584397  261584 out.go:204]   - Configuring RBAC rules ...
	I0801 23:28:24.999635  261584 cni.go:95] Creating CNI manager for "calico"
	I0801 23:28:25.001872  261584 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0801 23:28:25.003618  261584 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ...
	I0801 23:28:25.003645  261584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202050 bytes)
	I0801 23:28:25.046508  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0801 23:28:26.469941  261584 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.423389925s)
	I0801 23:28:26.470000  261584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0801 23:28:26.470088  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:26.470131  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=6048763279beb839e4a2f4b298ecea1c5d280a93 minikube.k8s.io/name=calico-20220801231635-9849 minikube.k8s.io/updated_at=2022_08_01T23_28_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:26.648221  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:26.732667  261584 ops.go:34] apiserver oom_adj: -16
	I0801 23:28:27.339553  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:27.840065  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:28.339915  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:28.840043  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:29.339547  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:29.839825  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:30.340392  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:30.840157  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:31.339453  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:31.840240  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:32.339441  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:32.839717  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:33.340307  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:33.840265  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:34.339738  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:34.839990  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:35.340111  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:35.839739  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:36.339597  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:36.839931  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:37.339676  261584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0801 23:28:37.419750  261584 kubeadm.go:1045] duration metric: took 10.949720448s to wait for elevateKubeSystemPrivileges.
	I0801 23:28:37.419787  261584 kubeadm.go:397] StartCluster complete in 23.41226749s
	I0801 23:28:37.419809  261584 settings.go:142] acquiring lock: {Name:mk2834aeeab3549d1affce120eb20cd08fd78486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:28:37.419926  261584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 23:28:37.421769  261584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig: {Name:mk908131de2da31ada6455cebc27e25fe21e4ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 23:28:37.941280  261584 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220801231635-9849" rescaled to 1
	I0801 23:28:37.941356  261584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0801 23:28:37.941376  261584 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0801 23:28:37.941429  261584 addons.go:65] Setting storage-provisioner=true in profile "calico-20220801231635-9849"
	I0801 23:28:37.941446  261584 addons.go:153] Setting addon storage-provisioner=true in "calico-20220801231635-9849"
	W0801 23:28:37.941451  261584 addons.go:162] addon storage-provisioner should already be in state true
	I0801 23:28:37.941470  261584 addons.go:65] Setting default-storageclass=true in profile "calico-20220801231635-9849"
	I0801 23:28:37.941495  261584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220801231635-9849"
	I0801 23:28:37.941349  261584 start.go:211] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0801 23:28:37.945030  261584 out.go:177] * Verifying Kubernetes components...
	I0801 23:28:37.941499  261584 host.go:66] Checking if "calico-20220801231635-9849" exists ...
	I0801 23:28:37.941561  261584 config.go:180] Loaded profile config "calico-20220801231635-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:28:37.941913  261584 cli_runner.go:164] Run: docker container inspect calico-20220801231635-9849 --format={{.State.Status}}
	I0801 23:28:37.946606  261584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 23:28:37.947100  261584 cli_runner.go:164] Run: docker container inspect calico-20220801231635-9849 --format={{.State.Status}}
	I0801 23:28:37.998057  261584 addons.go:153] Setting addon default-storageclass=true in "calico-20220801231635-9849"
	W0801 23:28:37.998088  261584 addons.go:162] addon default-storageclass should already be in state true
	I0801 23:28:37.998124  261584 host.go:66] Checking if "calico-20220801231635-9849" exists ...
	I0801 23:28:37.998660  261584 cli_runner.go:164] Run: docker container inspect calico-20220801231635-9849 --format={{.State.Status}}
	I0801 23:28:38.010965  261584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0801 23:28:38.012471  261584 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 23:28:38.012492  261584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0801 23:28:38.012550  261584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220801231635-9849
	I0801 23:28:38.050156  261584 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0801 23:28:38.050188  261584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0801 23:28:38.050256  261584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220801231635-9849
	I0801 23:28:38.053741  261584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0801 23:28:38.055037  261584 node_ready.go:35] waiting up to 5m0s for node "calico-20220801231635-9849" to be "Ready" ...
	I0801 23:28:38.058768  261584 node_ready.go:49] node "calico-20220801231635-9849" has status "Ready":"True"
	I0801 23:28:38.058791  261584 node_ready.go:38] duration metric: took 3.726367ms waiting for node "calico-20220801231635-9849" to be "Ready" ...
	I0801 23:28:38.058801  261584 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 23:28:38.068641  261584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/calico-20220801231635-9849/id_rsa Username:docker}
	I0801 23:28:38.068707  261584 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace to be "Ready" ...
	I0801 23:28:38.097744  261584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/calico-20220801231635-9849/id_rsa Username:docker}
	I0801 23:28:38.245862  261584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0801 23:28:38.343666  261584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0801 23:28:39.569896  261584 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.516114035s)
	I0801 23:28:39.569991  261584 start.go:809] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0801 23:28:40.413122  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:28:40.589067  261584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.343152361s)
	I0801 23:28:40.589148  261584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.245447065s)
	I0801 23:28:40.603683  261584 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0801 23:28:40.609062  261584 addons.go:414] enableAddons completed in 2.667684161s
	I0801 23:28:42.581561  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:28:44.582427  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:28:47.081119  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:28:49.081160  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:28:51.081354  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:28:53.081691  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:28:55.081927  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:28:57.580824  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:28:59.581782  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:02.080559  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:04.081573  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:06.081986  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:08.580618  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:10.581573  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:13.081688  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:15.580463  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:17.581422  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:19.581571  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:22.080733  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:24.081004  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:26.081614  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:28.081703  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:30.081842  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:32.081944  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:34.581779  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:37.081597  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:39.581150  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:41.581229  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:43.581603  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:46.081966  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:48.581206  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:51.080999  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:53.081044  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:55.081302  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:29:57.581860  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:00.081998  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:02.581720  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:05.081494  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:07.083596  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:09.580437  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:11.581386  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:14.081766  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:16.081937  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:18.582073  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:21.081982  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:23.580716  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:25.581050  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:27.581650  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:30.081234  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:32.081359  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:34.081732  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:36.581289  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:38.581509  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:41.082009  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:43.082934  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:45.580865  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:47.581656  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:50.080978  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:52.081194  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:54.581404  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:57.080982  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:30:59.081199  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:01.581144  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:03.585922  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:06.080937  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:08.600128  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:11.080665  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:13.081604  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:15.081953  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:17.581232  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:20.081676  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:22.580940  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:24.581191  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:27.081280  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:29.081517  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:31.580717  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:33.581152  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:35.581706  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:38.081349  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:40.581551  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:42.582276  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:45.080746  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:47.081386  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:49.581089  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:52.081847  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:54.581533  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:57.081540  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:31:59.081893  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:01.581692  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:04.081794  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:06.580962  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:08.581245  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:11.081594  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:13.582066  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:16.081826  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:18.580572  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:20.581654  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:23.081859  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:25.580583  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:27.581467  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:30.081216  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:32.081990  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:34.581023  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:36.581404  261584 pod_ready.go:102] pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:38.085034  261584 pod_ready.go:81] duration metric: took 4m0.016306459s waiting for pod "calico-kube-controllers-c44b4545-dl9vs" in "kube-system" namespace to be "Ready" ...
	E0801 23:32:38.085056  261584 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0801 23:32:38.085068  261584 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-r7tgr" in "kube-system" namespace to be "Ready" ...
	I0801 23:32:40.097209  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:42.596360  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:44.596626  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:47.096700  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:49.594602  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:51.595629  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:53.596030  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:56.096114  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:32:58.096335  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:00.596688  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:03.096169  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:05.595682  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:07.596264  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:09.596403  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:11.596619  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:14.096523  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:16.595271  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:18.596076  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:21.096444  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:23.097031  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:25.596429  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:27.597334  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:30.096414  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:32.595322  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:34.596553  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:37.096485  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:39.097936  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:41.595582  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:43.596302  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:46.095940  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:48.596261  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:51.095875  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:53.096366  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:55.098527  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:33:57.596200  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:00.095982  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:02.097129  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:04.596204  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:07.095852  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:09.097693  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:11.595762  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:13.596357  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:16.097024  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:18.596274  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:21.096582  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:23.595494  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:25.596787  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:28.096064  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:30.096947  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:32.596116  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:34.596437  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:36.596643  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:39.097253  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:41.595632  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:43.596692  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:46.095861  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:48.096305  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:50.096396  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:52.096792  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:54.596355  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:57.096153  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:34:59.096386  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:01.596027  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:04.097164  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:06.595797  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:08.596585  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:11.095934  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:13.097065  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:15.595791  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:18.096257  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:20.096361  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:22.595843  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:24.596104  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:26.596320  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:28.596735  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:31.097254  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:33.595866  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:36.099285  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:38.596469  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:41.096151  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:43.097198  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:45.595661  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:47.597959  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:50.096937  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:52.596583  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:55.096757  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:57.097669  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:35:59.097752  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:01.098638  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:03.595490  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:05.596777  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:08.096060  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:10.596062  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:13.097071  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:15.596172  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:18.096395  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:20.596383  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:23.096022  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:25.096091  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:27.096367  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:29.096497  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:31.595532  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:33.596180  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:36.095950  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:38.097069  261584 pod_ready.go:102] pod "calico-node-r7tgr" in "kube-system" namespace has status "Ready":"False"
	I0801 23:36:38.102055  261584 pod_ready.go:81] duration metric: took 4m0.01697566s waiting for pod "calico-node-r7tgr" in "kube-system" namespace to be "Ready" ...
	E0801 23:36:38.102073  261584 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0801 23:36:38.102112  261584 pod_ready.go:38] duration metric: took 8m0.043297567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0801 23:36:38.104577  261584 out.go:177] 
	W0801 23:36:38.106212  261584 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0801 23:36:38.106238  261584 out.go:239] * 
	* 
	W0801 23:36:38.107042  261584 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0801 23:36:38.108313  261584 out.go:177] 

** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (527.51s)

TestNetworkPlugins/group/bridge/DNS (352.96s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:30:15.876108    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:30:16.350555    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127004889s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130505049s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128000065s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:30:56.836985    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:31:06.208360    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12344476s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:31:18.428844    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801231743-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146522902s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.119015902s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:32:01.694102    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
E0801 23:32:01.699371    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
E0801 23:32:01.709653    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
E0801 23:32:01.729929    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
E0801 23:32:01.770189    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
E0801 23:32:01.850509    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
E0801 23:32:02.010895    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
E0801 23:32:02.331483    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
E0801 23:32:02.972576    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
E0801 23:32:04.253205    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
E0801 23:32:06.814322    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.124903065s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0801 23:32:11.935527    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:32:18.757974    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:32:22.175977    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.117302376s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0801 23:32:39.696733    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 23:32:42.656653    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132579784s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0801 23:33:03.163506    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 23:33:15.347395    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:15.352665    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:15.362896    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:15.383144    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:15.423384    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:15.503688    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:15.664728    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:15.984900    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:16.625592    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:17.906093    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:19.395532    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
E0801 23:33:20.466961    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:23.616850    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:25.587973    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
E0801 23:33:34.584618    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801231743-9849/client.crt: no such file or directory
E0801 23:33:35.828101    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134344125s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0801 23:33:52.264505    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
E0801 23:33:53.545180    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
E0801 23:33:56.105586    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
E0801 23:33:56.309040    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:34:31.947533    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129503843s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0801 23:34:45.537459    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.120221467s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (352.96s)

TestNetworkPlugins/group/enable-default-cni/DNS (363.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:33:50.986167    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
E0801 23:33:50.991409    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
E0801 23:33:51.001634    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
E0801 23:33:51.021866    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
E0801 23:33:51.062101    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
E0801 23:33:51.143132    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
E0801 23:33:51.303521    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
E0801 23:33:51.624077    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133487367s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0801 23:34:01.225725    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:34:02.269494    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801231743-9849/client.crt: no such file or directory
E0801 23:34:11.466583    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.119468354s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.107621927s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:34:34.915128    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:34:37.269506    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12695985s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:35:02.598443    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.117447858s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:35:12.908417    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147026291s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128266453s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125825541s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0801 23:36:34.828750    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141564904s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0801 23:37:01.693776    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:37:29.378847    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/auto-20220801231634-9849/client.crt: no such file or directory
E0801 23:37:39.697726    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127336068s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0801 23:38:03.163550    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 23:38:15.347719    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:38:34.585302    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801231743-9849/client.crt: no such file or directory
E0801 23:38:43.030314    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/kindnet-20220801231634-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129628459s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0801 23:38:50.985306    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
E0801 23:39:18.669593    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/cilium-20220801231635-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
E0801 23:39:34.914919    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130657153s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (363.16s)


Test pass (247/275)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 14.24
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.24.3/json-events 5.01
11 TestDownloadOnly/v1.24.3/preload-exists 0
15 TestDownloadOnly/v1.24.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.32
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.2
18 TestDownloadOnlyKic 3.69
19 TestBinaryMirror 0.89
20 TestOffline 82.02
22 TestAddons/Setup 114.33
24 TestAddons/parallel/Registry 17.38
25 TestAddons/parallel/Ingress 26.81
26 TestAddons/parallel/MetricsServer 5.47
27 TestAddons/parallel/HelmTiller 17.01
29 TestAddons/parallel/CSI 42.27
30 TestAddons/parallel/Headlamp 10.04
32 TestAddons/serial/GCPAuth 41.58
33 TestAddons/StoppedEnableDisable 20.3
34 TestCertOptions 30.98
35 TestCertExpiration 236.28
37 TestForceSystemdFlag 34.26
38 TestForceSystemdEnv 44.14
39 TestKVMDriverInstallOrUpdate 5.65
43 TestErrorSpam/setup 34.78
44 TestErrorSpam/start 1
45 TestErrorSpam/status 1.15
46 TestErrorSpam/pause 1.6
47 TestErrorSpam/unpause 1.6
48 TestErrorSpam/stop 20.39
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 45.58
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 15.39
55 TestFunctional/serial/KubeContext 0.04
56 TestFunctional/serial/KubectlGetPods 0.06
59 TestFunctional/serial/CacheCmd/cache/add_remote 4.25
60 TestFunctional/serial/CacheCmd/cache/add_local 2.18
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
62 TestFunctional/serial/CacheCmd/cache/list 0.07
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.37
64 TestFunctional/serial/CacheCmd/cache/cache_reload 2.38
65 TestFunctional/serial/CacheCmd/cache/delete 0.19
66 TestFunctional/serial/MinikubeKubectlCmd 0.12
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
68 TestFunctional/serial/ExtraConfig 72.83
69 TestFunctional/serial/ComponentHealth 0.06
70 TestFunctional/serial/LogsCmd 1.1
73 TestFunctional/parallel/ConfigCmd 0.53
74 TestFunctional/parallel/DashboardCmd 13.05
75 TestFunctional/parallel/DryRun 0.56
76 TestFunctional/parallel/InternationalLanguage 0.23
77 TestFunctional/parallel/StatusCmd 1.31
80 TestFunctional/parallel/ServiceCmd 11.13
81 TestFunctional/parallel/ServiceCmdConnect 8.88
82 TestFunctional/parallel/AddonsCmd 0.22
83 TestFunctional/parallel/PersistentVolumeClaim 37.63
85 TestFunctional/parallel/SSHCmd 0.88
86 TestFunctional/parallel/CpCmd 1.79
87 TestFunctional/parallel/MySQL 25.97
88 TestFunctional/parallel/FileSync 0.41
89 TestFunctional/parallel/CertSync 2.66
93 TestFunctional/parallel/NodeLabels 0.06
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.88
97 TestFunctional/parallel/ProfileCmd/profile_not_create 0.61
98 TestFunctional/parallel/Version/short 0.15
99 TestFunctional/parallel/Version/components 1.06
100 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
101 TestFunctional/parallel/ImageCommands/ImageListTable 0.36
102 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
103 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
104 TestFunctional/parallel/ImageCommands/ImageBuild 4.36
105 TestFunctional/parallel/ImageCommands/Setup 1.54
106 TestFunctional/parallel/ProfileCmd/profile_list 0.5
107 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
108 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
109 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
110 TestFunctional/parallel/ProfileCmd/profile_json_output 0.56
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.17
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 21.2
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.58
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.52
118 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.82
119 TestFunctional/parallel/ImageCommands/ImageRemove 0.77
120 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.56
121 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.28
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/parallel/MountCmd/any-port 10
129 TestFunctional/parallel/MountCmd/specific-port 2.23
130 TestFunctional/delete_addon-resizer_images 0.1
131 TestFunctional/delete_my-image_image 0.03
132 TestFunctional/delete_minikube_cached_images 0.03
135 TestIngressAddonLegacy/StartLegacyK8sCluster 73.97
137 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.18
138 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.39
139 TestIngressAddonLegacy/serial/ValidateIngressAddons 40.38
142 TestJSONOutput/start/Command 44.94
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 0.68
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 0.62
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 20.16
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 0.3
167 TestKicCustomNetwork/create_custom_network 35.94
168 TestKicCustomNetwork/use_default_bridge_network 30.3
169 TestKicExistingNetwork 29.78
170 TestKicCustomSubnet 29.64
171 TestMainNoArgs 0.06
172 TestMinikubeProfile 64.31
175 TestMountStart/serial/StartWithMountFirst 5.16
176 TestMountStart/serial/VerifyMountFirst 0.36
177 TestMountStart/serial/StartWithMountSecond 5
178 TestMountStart/serial/VerifyMountSecond 0.34
179 TestMountStart/serial/DeleteFirst 1.82
180 TestMountStart/serial/VerifyMountPostDelete 0.35
181 TestMountStart/serial/Stop 1.27
182 TestMountStart/serial/RestartStopped 6.72
183 TestMountStart/serial/VerifyMountPostStop 0.34
186 TestMultiNode/serial/FreshStart2Nodes 91.34
187 TestMultiNode/serial/DeployApp2Nodes 4.22
188 TestMultiNode/serial/PingHostFrom2Pods 0.84
189 TestMultiNode/serial/AddNode 35.25
190 TestMultiNode/serial/ProfileList 0.42
191 TestMultiNode/serial/CopyFile 12.2
192 TestMultiNode/serial/StopNode 2.46
193 TestMultiNode/serial/StartAfterStop 30.9
194 TestMultiNode/serial/RestartKeepsNodes 171.8
195 TestMultiNode/serial/DeleteNode 5.14
196 TestMultiNode/serial/StopMultiNode 40.34
197 TestMultiNode/serial/RestartMultiNode 106.19
198 TestMultiNode/serial/ValidateNameConflict 26.51
203 TestPreload 115.16
205 TestScheduledStopUnix 101.53
208 TestInsufficientStorage 16.3
209 TestRunningBinaryUpgrade 113.03
212 TestMissingContainerUpgrade 144.13
214 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
219 TestStoppedBinaryUpgrade/Setup 0.48
223 TestNoKubernetes/serial/StartWithK8s 50.8
224 TestStoppedBinaryUpgrade/Upgrade 136.08
225 TestNoKubernetes/serial/StartWithStopK8s 5.89
226 TestNoKubernetes/serial/Start 4.9
227 TestNoKubernetes/serial/VerifyK8sNotRunning 0.44
228 TestNoKubernetes/serial/ProfileList 2.34
229 TestNoKubernetes/serial/Stop 1.36
230 TestNoKubernetes/serial/StartNoArgs 7.08
231 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.45
233 TestPause/serial/Start 73.64
234 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
242 TestNetworkPlugins/group/false 0.49
246 TestPause/serial/SecondStartNoReconfiguration 15.8
247 TestPause/serial/Pause 0.8
248 TestPause/serial/VerifyStatus 0.42
249 TestPause/serial/Unpause 0.71
250 TestPause/serial/PauseAgain 1.01
251 TestPause/serial/DeletePaused 3.62
252 TestPause/serial/VerifyDeletedResources 5.91
254 TestStartStop/group/old-k8s-version/serial/FirstStart 118.94
256 TestStartStop/group/no-preload/serial/FirstStart 50.96
257 TestStartStop/group/no-preload/serial/DeployApp 9.43
258 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.62
259 TestStartStop/group/no-preload/serial/Stop 20.17
260 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
261 TestStartStop/group/no-preload/serial/SecondStart 312.49
262 TestStartStop/group/old-k8s-version/serial/DeployApp 9.32
263 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.71
264 TestStartStop/group/old-k8s-version/serial/Stop 20.16
265 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
266 TestStartStop/group/old-k8s-version/serial/SecondStart 448.02
268 TestStartStop/group/embed-certs/serial/FirstStart 55.32
269 TestStartStop/group/embed-certs/serial/DeployApp 9.4
270 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.66
271 TestStartStop/group/embed-certs/serial/Stop 20.18
272 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
273 TestStartStop/group/embed-certs/serial/SecondStart 316.42
274 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
275 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
276 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.42
277 TestStartStop/group/no-preload/serial/Pause 3.37
279 TestStartStop/group/default-k8s-different-port/serial/FirstStart 56.39
281 TestStartStop/group/newest-cni/serial/FirstStart 37.19
282 TestStartStop/group/newest-cni/serial/DeployApp 0
283 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.57
284 TestStartStop/group/newest-cni/serial/Stop 20.27
285 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.32
286 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.71
287 TestStartStop/group/default-k8s-different-port/serial/Stop 20.29
288 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
289 TestStartStop/group/newest-cni/serial/SecondStart 31.37
290 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.22
291 TestStartStop/group/default-k8s-different-port/serial/SecondStart 560.48
292 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
293 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
294 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.43
295 TestStartStop/group/newest-cni/serial/Pause 3.43
296 TestNetworkPlugins/group/auto/Start 47
297 TestNetworkPlugins/group/auto/KubeletFlags 0.37
298 TestNetworkPlugins/group/auto/NetCatPod 10.18
299 TestNetworkPlugins/group/auto/DNS 0.15
300 TestNetworkPlugins/group/auto/Localhost 0.13
301 TestNetworkPlugins/group/auto/HairPin 0.14
302 TestNetworkPlugins/group/kindnet/Start 59.78
303 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
304 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
305 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
306 TestStartStop/group/embed-certs/serial/Pause 3.22
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
308 TestNetworkPlugins/group/cilium/Start 74.8
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.63
311 TestStartStop/group/old-k8s-version/serial/Pause 3.98
313 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
314 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
315 TestNetworkPlugins/group/kindnet/NetCatPod 10.19
316 TestNetworkPlugins/group/kindnet/DNS 0.15
317 TestNetworkPlugins/group/kindnet/Localhost 0.14
318 TestNetworkPlugins/group/kindnet/HairPin 0.12
319 TestNetworkPlugins/group/enable-default-cni/Start 301.67
320 TestNetworkPlugins/group/cilium/ControllerPod 5.02
321 TestNetworkPlugins/group/cilium/KubeletFlags 0.39
322 TestNetworkPlugins/group/cilium/NetCatPod 10.86
323 TestNetworkPlugins/group/cilium/DNS 0.17
324 TestNetworkPlugins/group/cilium/Localhost 0.16
325 TestNetworkPlugins/group/cilium/HairPin 0.12
326 TestNetworkPlugins/group/bridge/Start 40.67
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
328 TestNetworkPlugins/group/bridge/NetCatPod 10.22
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.19
333 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.01
334 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.07
335 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.39
336 TestStartStop/group/default-k8s-different-port/serial/Pause 3.22
TestDownloadOnly/v1.16.0/json-events (14.24s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220801224520-9849 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220801224520-9849 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.244639165s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.24s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220801224520-9849
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220801224520-9849: exit status 85 (80.143414ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	| Command |               Args                |              Profile              |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p        | download-only-20220801224520-9849 | jenkins | v1.26.0 | 01 Aug 22 22:45 UTC |          |
	|         | download-only-20220801224520-9849 |                                   |         |         |                     |          |
	|         | --force --alsologtostderr         |                                   |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                                   |         |         |                     |          |
	|         | --container-runtime=containerd    |                                   |         |         |                     |          |
	|         | --driver=docker                   |                                   |         |         |                     |          |
	|         | --container-runtime=containerd    |                                   |         |         |                     |          |
	|---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 22:45:20
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 22:45:20.693753    9861 out.go:296] Setting OutFile to fd 1 ...
	I0801 22:45:20.693850    9861 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 22:45:20.693858    9861 out.go:309] Setting ErrFile to fd 2...
	I0801 22:45:20.693862    9861 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 22:45:20.693962    9861 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	W0801 22:45:20.694078    9861 root.go:310] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/config/config.json: no such file or directory
	I0801 22:45:20.694732    9861 out.go:303] Setting JSON to true
	I0801 22:45:20.695543    9861 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1671,"bootTime":1659392250,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0801 22:45:20.695600    9861 start.go:125] virtualization: kvm guest
	I0801 22:45:20.698361    9861 out.go:97] [download-only-20220801224520-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0801 22:45:20.699651    9861 out.go:169] MINIKUBE_LOCATION=14695
	I0801 22:45:20.698476    9861 notify.go:193] Checking for updates...
	W0801 22:45:20.698480    9861 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball: no such file or directory
	I0801 22:45:20.701982    9861 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 22:45:20.703228    9861 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 22:45:20.704277    9861 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 22:45:20.706134    9861 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0801 22:45:20.708397    9861 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0801 22:45:20.708558    9861 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 22:45:20.744177    9861 docker.go:137] docker version: linux-20.10.17
	I0801 22:45:20.744235    9861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 22:45:21.465544    9861 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2022-08-01 22:45:20.769769834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 22:45:21.465644    9861 docker.go:254] overlay module found
	I0801 22:45:21.467731    9861 out.go:97] Using the docker driver based on user configuration
	I0801 22:45:21.467750    9861 start.go:284] selected driver: docker
	I0801 22:45:21.467758    9861 start.go:808] validating driver "docker" against <nil>
	I0801 22:45:21.467846    9861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 22:45:21.567544    9861 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-08-01 22:45:21.494956251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 22:45:21.567672    9861 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0801 22:45:21.568173    9861 start_flags.go:377] Using suggested 8000MB memory alloc based on sys=32103MB, container=32103MB
	I0801 22:45:21.568279    9861 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0801 22:45:21.570264    9861 out.go:169] Using Docker driver with root privileges
	I0801 22:45:21.571676    9861 cni.go:95] Creating CNI manager for ""
	I0801 22:45:21.571693    9861 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0801 22:45:21.571707    9861 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0801 22:45:21.571716    9861 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0801 22:45:21.571721    9861 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni
	I0801 22:45:21.571733    9861 start_flags.go:310] config:
	{Name:download-only-20220801224520-9849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220801224520-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 22:45:21.573293    9861 out.go:97] Starting control plane node download-only-20220801224520-9849 in cluster download-only-20220801224520-9849
	I0801 22:45:21.573318    9861 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0801 22:45:21.574747    9861 out.go:97] Pulling base image ...
	I0801 22:45:21.574776    9861 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0801 22:45:21.574805    9861 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
	I0801 22:45:21.602594    9861 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 to local cache
	I0801 22:45:21.602865    9861 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local cache directory
	I0801 22:45:21.602983    9861 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 to local cache
	I0801 22:45:21.677923    9861 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0801 22:45:21.677949    9861 cache.go:57] Caching tarball of preloaded images
	I0801 22:45:21.678127    9861 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0801 22:45:21.680510    9861 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0801 22:45:21.680530    9861 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0801 22:45:21.788631    9861 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0801 22:45:24.309036    9861 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0801 22:45:24.309115    9861 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0801 22:45:25.170804    9861 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0801 22:45:25.171115    9861 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/download-only-20220801224520-9849/config.json ...
	I0801 22:45:25.171179    9861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/download-only-20220801224520-9849/config.json: {Name:mkb52f8ab55c0fb3b5c330d20ce698409d137a29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0801 22:45:25.171341    9861 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0801 22:45:25.171540    9861 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220801224520-9849"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

TestDownloadOnly/v1.24.3/json-events (5.01s)

=== RUN   TestDownloadOnly/v1.24.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220801224520-9849 --force --alsologtostderr --kubernetes-version=v1.24.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220801224520-9849 --force --alsologtostderr --kubernetes-version=v1.24.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.012445959s)
--- PASS: TestDownloadOnly/v1.24.3/json-events (5.01s)

TestDownloadOnly/v1.24.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.24.3/preload-exists
--- PASS: TestDownloadOnly/v1.24.3/preload-exists (0.00s)

TestDownloadOnly/v1.24.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.24.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220801224520-9849
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220801224520-9849: exit status 85 (76.595852ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	| Command |               Args                |              Profile              |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p        | download-only-20220801224520-9849 | jenkins | v1.26.0 | 01 Aug 22 22:45 UTC |          |
	|         | download-only-20220801224520-9849 |                                   |         |         |                     |          |
	|         | --force --alsologtostderr         |                                   |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                                   |         |         |                     |          |
	|         | --container-runtime=containerd    |                                   |         |         |                     |          |
	|         | --driver=docker                   |                                   |         |         |                     |          |
	|         | --container-runtime=containerd    |                                   |         |         |                     |          |
	| start   | -o=json --download-only -p        | download-only-20220801224520-9849 | jenkins | v1.26.0 | 01 Aug 22 22:45 UTC |          |
	|         | download-only-20220801224520-9849 |                                   |         |         |                     |          |
	|         | --force --alsologtostderr         |                                   |         |         |                     |          |
	|         | --kubernetes-version=v1.24.3      |                                   |         |         |                     |          |
	|         | --container-runtime=containerd    |                                   |         |         |                     |          |
	|         | --driver=docker                   |                                   |         |         |                     |          |
	|         | --container-runtime=containerd    |                                   |         |         |                     |          |
	|---------|-----------------------------------|-----------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/08/01 22:45:35
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0801 22:45:35.022828   10026 out.go:296] Setting OutFile to fd 1 ...
	I0801 22:45:35.022933   10026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 22:45:35.022943   10026 out.go:309] Setting ErrFile to fd 2...
	I0801 22:45:35.022948   10026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 22:45:35.023035   10026 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	W0801 22:45:35.023141   10026 root.go:310] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/config/config.json: no such file or directory
	I0801 22:45:35.023522   10026 out.go:303] Setting JSON to true
	I0801 22:45:35.024239   10026 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1685,"bootTime":1659392250,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0801 22:45:35.024292   10026 start.go:125] virtualization: kvm guest
	I0801 22:45:35.026553   10026 out.go:97] [download-only-20220801224520-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0801 22:45:35.028321   10026 out.go:169] MINIKUBE_LOCATION=14695
	I0801 22:45:35.026695   10026 notify.go:193] Checking for updates...
	I0801 22:45:35.031284   10026 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 22:45:35.032721   10026 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 22:45:35.034329   10026 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 22:45:35.035904   10026 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220801224520-9849"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.3/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.32s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.32s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220801224520-9849
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnlyKic (3.69s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220801224540-9849 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220801224540-9849 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (2.592504713s)
helpers_test.go:175: Cleaning up "download-docker-20220801224540-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220801224540-9849
--- PASS: TestDownloadOnlyKic (3.69s)

TestBinaryMirror (0.89s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220801224544-9849 --alsologtostderr --binary-mirror http://127.0.0.1:45783 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-20220801224544-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220801224544-9849
--- PASS: TestBinaryMirror (0.89s)

TestOffline (82.02s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20220801231329-9849 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20220801231329-9849 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m17.324861832s)
helpers_test.go:175: Cleaning up "offline-containerd-20220801231329-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20220801231329-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220801231329-9849: (4.694891827s)
--- PASS: TestOffline (82.02s)

TestAddons/Setup (114.33s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220801224545-9849 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220801224545-9849 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m54.326290517s)
--- PASS: TestAddons/Setup (114.33s)

TestAddons/parallel/Registry (17.38s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 8.58054ms
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-fkknj" [4753a5c6-07fd-4b18-b9c3-0601316544e5] Running
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008947395s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-8xn5m" [947d936f-c3d3-4a43-8de6-691be7057058] Running
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018054113s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220801224545-9849 delete po -l run=registry-test --now
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220801224545-9849 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:297: (dbg) Done: kubectl --context addons-20220801224545-9849 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.599540985s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220801224545-9849 ip
2022/08/01 22:47:56 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:340: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220801224545-9849 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.38s)

TestAddons/parallel/Ingress (26.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220801224545-9849 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:164: (dbg) Done: kubectl --context addons-20220801224545-9849 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (5.610431864s)
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220801224545-9849 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:184: (dbg) Done: kubectl --context addons-20220801224545-9849 replace --force -f testdata/nginx-ingress-v1.yaml: (1.473871666s)
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220801224545-9849 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [8fce5557-cccb-4ce1-b025-30e349c26d5b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [8fce5557-cccb-4ce1-b025-30e349c26d5b] Running
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005417603s
addons_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220801224545-9849 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Run:  kubectl --context addons-20220801224545-9849 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220801224545-9849 ip
addons_test.go:249: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220801224545-9849 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:258: (dbg) Done: out/minikube-linux-amd64 -p addons-20220801224545-9849 addons disable ingress-dns --alsologtostderr -v=1: (1.216589083s)
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220801224545-9849 addons disable ingress --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p addons-20220801224545-9849 addons disable ingress --alsologtostderr -v=1: (7.525933939s)
--- PASS: TestAddons/parallel/Ingress (26.81s)

TestAddons/parallel/MetricsServer (5.47s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 8.110598ms
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-8595bd7d4c-mzvdk" [dc3fa641-b183-4b66-aa85-bbc547692ead] Running
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009013354s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220801224545-9849 top pods -n kube-system
addons_test.go:384: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220801224545-9849 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.47s)

TestAddons/parallel/HelmTiller (17.01s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 2.000822ms
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-c7d76457b-p2gp7" [da0cd480-ad78-4275-9fde-fb814ec02220] Running
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007713974s
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220801224545-9849 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:425: (dbg) Done: kubectl --context addons-20220801224545-9849 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.660144787s)
addons_test.go:430: kubectl --context addons-20220801224545-9849 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220801224545-9849 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:425: (dbg) Done: kubectl --context addons-20220801224545-9849 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.556915938s)
addons_test.go:442: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220801224545-9849 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (17.01s)

TestAddons/parallel/CSI (42.27s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 11.043351ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220801224545-9849 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220801224545-9849 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220801224545-9849 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220801224545-9849 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [d5daa64b-5325-44f1-be02-c3c300f87d41] Pending
helpers_test.go:342: "task-pv-pod" [d5daa64b-5325-44f1-be02-c3c300f87d41] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [d5daa64b-5325-44f1-be02-c3c300f87d41] Running
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.006475282s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220801224545-9849 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220801224545-9849 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:425: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220801224545-9849 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220801224545-9849 delete pod task-pv-pod
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220801224545-9849 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220801224545-9849 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220801224545-9849 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220801224545-9849 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [3db976ba-64c2-4545-8451-30a7e370c336] Pending
helpers_test.go:342: "task-pv-pod-restore" [3db976ba-64c2-4545-8451-30a7e370c336] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [3db976ba-64c2-4545-8451-30a7e370c336] Running
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.006877256s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220801224545-9849 delete pod task-pv-pod-restore
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220801224545-9849 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220801224545-9849 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220801224545-9849 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-linux-amd64 -p addons-20220801224545-9849 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.837167243s)
addons_test.go:594: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220801224545-9849 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.27s)

TestAddons/parallel/Headlamp (10.04s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-20220801224545-9849 --alsologtostderr -v=1
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-866f5bd7bc-sw2jx" [02dbc6e4-5450-470e-8c20-f6a0e7d8b4b8] Pending
helpers_test.go:342: "headlamp-866f5bd7bc-sw2jx" [02dbc6e4-5450-470e-8c20-f6a0e7d8b4b8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-866f5bd7bc-sw2jx" [02dbc6e4-5450-470e-8c20-f6a0e7d8b4b8] Running
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.072651904s
--- PASS: TestAddons/parallel/Headlamp (10.04s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth (41.58s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220801224545-9849 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220801224545-9849 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [d1535010-7b66-4dae-ab6c-3dfe1afc6753] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [d1535010-7b66-4dae-ab6c-3dfe1afc6753] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.006181171s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220801224545-9849 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220801224545-9849 describe sa gcp-auth-test
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220801224545-9849 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220801224545-9849 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-linux-amd64 -p addons-20220801224545-9849 addons disable gcp-auth --alsologtostderr -v=1: (6.138111406s)
addons_test.go:703: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220801224545-9849 addons enable gcp-auth
addons_test.go:703: (dbg) Done: out/minikube-linux-amd64 -p addons-20220801224545-9849 addons enable gcp-auth: (2.184434054s)
addons_test.go:709: (dbg) Run:  kubectl --context addons-20220801224545-9849 apply -f testdata/private-image.yaml
addons_test.go:716: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7c74db7cd9-dcgcn" [2d593561-43ae-4915-b4e8-7e85f0aaebc4] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7c74db7cd9-dcgcn" [2d593561-43ae-4915-b4e8-7e85f0aaebc4] Running
addons_test.go:716: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 17.005501614s
addons_test.go:722: (dbg) Run:  kubectl --context addons-20220801224545-9849 apply -f testdata/private-image-eu.yaml
addons_test.go:727: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-545d57c67f-zbgr7" [393d3abd-ddb6-45df-ac0b-d2af461f2c08] Pending
helpers_test.go:342: "private-image-eu-545d57c67f-zbgr7" [393d3abd-ddb6-45df-ac0b-d2af461f2c08] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-545d57c67f-zbgr7" [393d3abd-ddb6-45df-ac0b-d2af461f2c08] Running
addons_test.go:727: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 7.006459825s
--- PASS: TestAddons/serial/GCPAuth (41.58s)

TestAddons/StoppedEnableDisable (20.3s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220801224545-9849
addons_test.go:134: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220801224545-9849: (20.097455528s)
addons_test.go:138: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220801224545-9849
addons_test.go:142: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220801224545-9849
--- PASS: TestAddons/StoppedEnableDisable (20.30s)

TestCertOptions (30.98s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220801231704-9849 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220801231704-9849 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (27.684368398s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220801231704-9849 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220801231704-9849 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220801231704-9849 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220801231704-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220801231704-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220801231704-9849: (2.429424003s)
--- PASS: TestCertOptions (30.98s)

TestCertExpiration (236.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220801231640-9849 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220801231640-9849 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.663789466s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220801231640-9849 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220801231640-9849 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (15.168662348s)
helpers_test.go:175: Cleaning up "cert-expiration-20220801231640-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220801231640-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220801231640-9849: (2.44996971s)
--- PASS: TestCertExpiration (236.28s)

TestForceSystemdFlag (34.26s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220801231709-9849 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220801231709-9849 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (28.028353575s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220801231709-9849 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220801231709-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220801231709-9849
E0801 23:17:39.697295    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220801231709-9849: (5.802802984s)
--- PASS: TestForceSystemdFlag (34.26s)

TestForceSystemdEnv (44.14s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220801231549-9849 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220801231549-9849 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.643974455s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220801231549-9849 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-20220801231549-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220801231549-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220801231549-9849: (4.130699511s)
--- PASS: TestForceSystemdEnv (44.14s)

TestKVMDriverInstallOrUpdate (5.65s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.65s)

TestErrorSpam/setup (34.78s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220801224930-9849 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220801224930-9849 --driver=docker  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220801224930-9849 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220801224930-9849 --driver=docker  --container-runtime=containerd: (34.779169765s)
--- PASS: TestErrorSpam/setup (34.78s)

TestErrorSpam/start (1s)
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 start --dry-run
--- PASS: TestErrorSpam/start (1.00s)

TestErrorSpam/status (1.15s)
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (1.6s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 pause
--- PASS: TestErrorSpam/pause (1.60s)

TestErrorSpam/unpause (1.6s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

TestErrorSpam/stop (20.39s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 stop: (20.118124385s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220801224930-9849 --log_dir /tmp/nospam-20220801224930-9849 stop
--- PASS: TestErrorSpam/stop (20.39s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/files/etc/test/nested/copy/9849/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.58s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220801225035-9849 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2160: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220801225035-9849 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (45.583603342s)
--- PASS: TestFunctional/serial/StartWithProxy (45.58s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.39s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220801225035-9849 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220801225035-9849 --alsologtostderr -v=8: (15.389956814s)
functional_test.go:655: soft start took 15.390608575s for "functional-20220801225035-9849" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.39s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220801225035-9849 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 cache add k8s.gcr.io/pause:3.1: (1.470821943s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 cache add k8s.gcr.io/pause:3.3: (1.584362087s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 cache add k8s.gcr.io/pause:latest: (1.19700814s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.25s)

TestFunctional/serial/CacheCmd/cache/add_local (2.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220801225035-9849 /tmp/TestFunctionalserialCacheCmdcacheadd_local2528263743/001
functional_test.go:1081: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 cache add minikube-local-cache-test:functional-20220801225035-9849
functional_test.go:1081: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 cache add minikube-local-cache-test:functional-20220801225035-9849: (1.908617517s)
functional_test.go:1086: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 cache delete minikube-local-cache-test:functional-20220801225035-9849
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220801225035-9849
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.18s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (355.607474ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 cache reload: (1.253471574s)
functional_test.go:1155: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)

TestFunctional/serial/CacheCmd/cache/delete (0.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.19s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 kubectl -- --context functional-20220801225035-9849 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220801225035-9849 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (72.83s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220801225035-9849 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0801 22:52:39.696309    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 22:52:39.702373    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 22:52:39.712632    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 22:52:39.732893    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 22:52:39.773130    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 22:52:39.853434    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 22:52:40.013934    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 22:52:40.334498    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 22:52:40.975454    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 22:52:42.255713    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 22:52:44.817490    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 22:52:49.938522    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
functional_test.go:749: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220801225035-9849 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m12.831611625s)
functional_test.go:753: restart took 1m12.831724344s for "functional-20220801225035-9849" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (72.83s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220801225035-9849 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.1s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 logs
E0801 22:53:00.179225    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
functional_test.go:1228: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 logs: (1.10439687s)
--- PASS: TestFunctional/serial/LogsCmd (1.10s)

TestFunctional/parallel/ConfigCmd (0.53s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220801225035-9849 config get cpus: exit status 14 (86.611189ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220801225035-9849 config get cpus: exit status 14 (81.686016ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)

TestFunctional/parallel/DashboardCmd (13.05s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220801225035-9849 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220801225035-9849 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 46838: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.05s)

TestFunctional/parallel/DryRun (0.56s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220801225035-9849 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:966: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220801225035-9849 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (222.57995ms)

-- stdout --
	* [functional-20220801225035-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0801 22:53:30.312351   45627 out.go:296] Setting OutFile to fd 1 ...
	I0801 22:53:30.312503   45627 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 22:53:30.312521   45627 out.go:309] Setting ErrFile to fd 2...
	I0801 22:53:30.312529   45627 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 22:53:30.312793   45627 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 22:53:30.313572   45627 out.go:303] Setting JSON to false
	I0801 22:53:30.314736   45627 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2160,"bootTime":1659392250,"procs":592,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0801 22:53:30.314798   45627 start.go:125] virtualization: kvm guest
	I0801 22:53:30.316996   45627 out.go:177] * [functional-20220801225035-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0801 22:53:30.318362   45627 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 22:53:30.319532   45627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 22:53:30.320693   45627 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 22:53:30.322737   45627 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 22:53:30.324034   45627 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0801 22:53:30.325589   45627 config.go:180] Loaded profile config "functional-20220801225035-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 22:53:30.325942   45627 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 22:53:30.363849   45627 docker.go:137] docker version: linux-20.10.17
	I0801 22:53:30.363948   45627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 22:53:30.463461   45627 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-08-01 22:53:30.392270583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 22:53:30.463594   45627 docker.go:254] overlay module found
	I0801 22:53:30.465820   45627 out.go:177] * Using the docker driver based on existing profile
	I0801 22:53:30.467859   45627 start.go:284] selected driver: docker
	I0801 22:53:30.467878   45627 start.go:808] validating driver "docker" against &{Name:functional-20220801225035-9849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220801225035-9849 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 22:53:30.467976   45627 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 22:53:30.470063   45627 out.go:177] 
	W0801 22:53:30.471297   45627 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0801 22:53:30.472414   45627 out.go:177] 
** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220801225035-9849 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.56s)
TestFunctional/parallel/InternationalLanguage (0.23s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220801225035-9849 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220801225035-9849 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (232.567188ms)
-- stdout --
	* [functional-20220801225035-9849] minikube v1.26.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0801 22:53:27.472619   44521 out.go:296] Setting OutFile to fd 1 ...
	I0801 22:53:27.472731   44521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 22:53:27.472740   44521 out.go:309] Setting ErrFile to fd 2...
	I0801 22:53:27.472747   44521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 22:53:27.472905   44521 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 22:53:27.473486   44521 out.go:303] Setting JSON to false
	I0801 22:53:27.474734   44521 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2158,"bootTime":1659392250,"procs":586,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0801 22:53:27.474797   44521 start.go:125] virtualization: kvm guest
	I0801 22:53:27.477150   44521 out.go:177] * [functional-20220801225035-9849] minikube v1.26.0 sur Ubuntu 20.04 (kvm/amd64)
	I0801 22:53:27.478666   44521 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 22:53:27.479970   44521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 22:53:27.481315   44521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 22:53:27.482749   44521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 22:53:27.484263   44521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0801 22:53:27.486024   44521 config.go:180] Loaded profile config "functional-20220801225035-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 22:53:27.486444   44521 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 22:53:27.526159   44521 docker.go:137] docker version: linux-20.10.17
	I0801 22:53:27.526260   44521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 22:53:27.629030   44521 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-08-01 22:53:27.555964626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 22:53:27.629166   44521 docker.go:254] overlay module found
	I0801 22:53:27.631486   44521 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0801 22:53:27.632803   44521 start.go:284] selected driver: docker
	I0801 22:53:27.632822   44521 start.go:808] validating driver "docker" against &{Name:functional-20220801225035-9849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1659115536-14579@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220801225035-9849 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-sec
urity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0801 22:53:27.632948   44521 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 22:53:27.635178   44521 out.go:177] 
	W0801 22:53:27.636582   44521 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0801 22:53:27.637912   44521 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
TestFunctional/parallel/StatusCmd (1.31s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)
TestFunctional/parallel/ServiceCmd (11.13s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220801225035-9849 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220801225035-9849 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-8tkqt" [49488a3b-202f-4e4a-bf00-1563d33dab01] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54c4b5c49f-8tkqt" [49488a3b-202f-4e4a-bf00-1563d33dab01] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 7.006065723s
functional_test.go:1448: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 service list
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 service list: (1.776861256s)
functional_test.go:1462: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 service --namespace=default --https --url hello-node
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1475: found endpoint: https://192.168.49.2:32513
functional_test.go:1490: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 service hello-node --url --format={{.IP}}
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 service hello-node --url
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1510: found endpoint for hello-node: http://192.168.49.2:32513
--- PASS: TestFunctional/parallel/ServiceCmd (11.13s)
TestFunctional/parallel/ServiceCmdConnect (8.88s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220801225035-9849 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220801225035-9849 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-578cdc45cb-r9tvm" [e9561c91-98da-4d9f-b60e-8d0c640e16ef] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-578cdc45cb-r9tvm" [e9561c91-98da-4d9f-b60e-8d0c640e16ef] Running
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008986016s
functional_test.go:1578: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 service hello-node-connect --url
functional_test.go:1584: found endpoint for hello-node-connect: http://192.168.49.2:30028
functional_test.go:1604: http://192.168.49.2:30028: success! body:
Hostname: hello-node-connect-578cdc45cb-r9tvm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30028
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.88s)
TestFunctional/parallel/AddonsCmd (0.22s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 addons list
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1631: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)
TestFunctional/parallel/PersistentVolumeClaim (37.63s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [214e8aeb-68f5-4c70-820c-e9471e324802] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008548962s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220801225035-9849 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220801225035-9849 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220801225035-9849 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220801225035-9849 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [4b00a5aa-4fd1-4d09-bc28-0222cdc63a21] Pending
helpers_test.go:342: "sp-pod" [4b00a5aa-4fd1-4d09-bc28-0222cdc63a21] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [4b00a5aa-4fd1-4d09-bc28-0222cdc63a21] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.006271156s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220801225035-9849 exec sp-pod -- touch /tmp/mount/foo
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220801225035-9849 delete -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220801225035-9849 delete -f testdata/storage-provisioner/pod.yaml: (2.636573969s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220801225035-9849 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [41cc2e1b-be3a-45c0-b679-529d1303a90b] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [41cc2e1b-be3a-45c0-b679-529d1303a90b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [41cc2e1b-be3a-45c0-b679-529d1303a90b] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.007080889s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220801225035-9849 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.63s)
TestFunctional/parallel/SSHCmd (0.88s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.88s)
TestFunctional/parallel/CpCmd (1.79s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh -n functional-20220801225035-9849 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 cp functional-20220801225035-9849:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2267492657/001/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh -n functional-20220801225035-9849 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.79s)
TestFunctional/parallel/MySQL (25.97s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220801225035-9849 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-lr8cd" [91321a12-7c5a-4ff0-af4e-e531c9350428] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-lr8cd" [91321a12-7c5a-4ff0-af4e-e531c9350428] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.067688414s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220801225035-9849 exec mysql-67f7d69d8b-lr8cd -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220801225035-9849 exec mysql-67f7d69d8b-lr8cd -- mysql -ppassword -e "show databases;": exit status 1 (317.288012ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220801225035-9849 exec mysql-67f7d69d8b-lr8cd -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220801225035-9849 exec mysql-67f7d69d8b-lr8cd -- mysql -ppassword -e "show databases;": exit status 1 (246.384839ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220801225035-9849 exec mysql-67f7d69d8b-lr8cd -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220801225035-9849 exec mysql-67f7d69d8b-lr8cd -- mysql -ppassword -e "show databases;": exit status 1 (255.499559ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220801225035-9849 exec mysql-67f7d69d8b-lr8cd -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220801225035-9849 exec mysql-67f7d69d8b-lr8cd -- mysql -ppassword -e "show databases;": exit status 1 (213.742544ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220801225035-9849 exec mysql-67f7d69d8b-lr8cd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.97s)
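The transient `ERROR 1045` / `ERROR 2002` failures above occur while mysqld is still initializing inside the freshly started container; the test simply re-runs the same query until it succeeds within the deadline. A minimal sketch of that retry pattern in POSIX sh (the `retry_until` helper and its argument layout are hypothetical, not part of the minikube test harness):

```shell
#!/usr/bin/env sh
# Sketch of the retry-until-success pattern visible in the log above:
# early attempts fail with transient errors while the service warms up,
# so the command is re-run until it exits 0 or the attempt budget runs out.

retry_until() {
  # retry_until <max_attempts> <delay_seconds> <command...>
  attempts=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      return 0            # command succeeded, stop retrying
    fi
    i=$((i + 1))
    sleep "$delay"        # back off before the next attempt
  done
  return 1                # attempt budget exhausted
}

# In the real test the retried command would be the
# `kubectl --context ... exec ... -- mysql -ppassword -e "show databases;"`
# invocation shown above. Demonstrated here with a trivially succeeding command:
retry_until 3 0 true && echo "command eventually succeeded"
```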
TestFunctional/parallel/FileSync (0.41s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/9849/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo cat /etc/test/nested/copy/9849/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)
TestFunctional/parallel/CertSync (2.66s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/9849.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo cat /etc/ssl/certs/9849.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/9849.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo cat /usr/share/ca-certificates/9849.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/98492.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo cat /etc/ssl/certs/98492.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/98492.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo cat /usr/share/ca-certificates/98492.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.66s)
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220801225035-9849 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.88s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo systemctl is-active docker"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo systemctl is-active docker": exit status 1 (456.9989ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo systemctl is-active crio": exit status 1 (421.768687ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.88s)
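The non-zero exits above are expected: `systemctl is-active` prints the unit state and exits 0 only when the unit is active, so an inactive runtime yields exit status 3, which ssh surfaces as "Process exited with status 3". The test therefore keys off the `inactive` stdout rather than the exit code. A sketch of that check (the `check_runtime_disabled` helper is hypothetical and stubs the `systemctl` output rather than querying a VM):

```shell
#!/usr/bin/env sh
# Illustrates why "exit status 3" plus stdout "inactive" counts as a pass:
# only the reported unit state matters, not the non-zero exit from systemctl.

check_runtime_disabled() {
  # $1: the state string `systemctl is-active <runtime>` would print
  state=$1
  if [ "$state" = "inactive" ]; then
    echo "runtime disabled, as expected"
    return 0
  fi
  echo "unexpected state: $state" >&2
  return 1
}

check_runtime_disabled inactive
```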
TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)
TestFunctional/parallel/Version/short (0.15s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 version --short
--- PASS: TestFunctional/parallel/Version/short (0.15s)
TestFunctional/parallel/Version/components (1.06s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 version -o=json --components: (1.060467418s)
--- PASS: TestFunctional/parallel/Version/components (1.06s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls --format short
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.7
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220801225035-9849
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220801225035-9849
docker.io/kindest/kindnetd:v20220726-ed811e41
docker.io/kindest/kindnetd:v20220510-4929dd75
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls --format table
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20220726-ed811e41             | sha256:d921ce | 25.8MB |
| gcr.io/google-containers/addon-resizer      | functional-20220801225035-9849 | sha256:ffd4cf | 10.8MB |
| k8s.gcr.io/kube-scheduler                   | v1.24.3                        | sha256:3a5aa3 | 15.5MB |
| k8s.gcr.io/pause                            | 3.3                            | sha256:0184c1 | 298kB  |
| k8s.gcr.io/pause                            | 3.7                            | sha256:221177 | 311kB  |
| docker.io/kindest/kindnetd                  | v20220510-4929dd75             | sha256:6fb66c | 45.2MB |
| docker.io/library/nginx                     | alpine                         | sha256:e46bcc | 10.2MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/etcd                             | 3.5.3-0                        | sha256:aebe75 | 102MB  |
| docker.io/library/minikube-local-cache-test | functional-20220801225035-9849 | sha256:953320 | 1.74kB |
| k8s.gcr.io/echoserver                       | 1.8                            | sha256:82e4c8 | 46.2MB |
| k8s.gcr.io/kube-proxy                       | v1.24.3                        | sha256:2ae1ba | 39.5MB |
| k8s.gcr.io/pause                            | 3.1                            | sha256:da86e6 | 315kB  |
| k8s.gcr.io/pause                            | latest                         | sha256:350b16 | 72.3kB |
| docker.io/library/mysql                     | 5.7                            | sha256:314749 | 128MB  |
| docker.io/library/nginx                     | latest                         | sha256:670dcc | 56.7MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | sha256:a4ca41 | 13.6MB |
| k8s.gcr.io/kube-apiserver                   | v1.24.3                        | sha256:d521dd | 33.8MB |
| k8s.gcr.io/kube-controller-manager          | v1.24.3                        | sha256:586c11 | 31MB   |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls --format json:
[{"id":"sha256:6fb66cd78abfe9e0735a9a751f2586b7984e0d279e87fa8dd175781de6595627","repoDigests":["docker.io/kindest/kindnetd@sha256:39494477a3fa001aae716b704a8991f4f62d2ccf1aaaa65692da6c805b18856c"],"repoTags":["docker.io/kindest/kindnetd:v20220510-4929dd75"],"size":"45239873"},
{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},
{"id":"sha256:2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302","repoDigests":["k8s.gcr.io/kube-proxy@sha256:c1b135231b5b1a6799346cd701da4b59e5b7ef8e694ec7b04fb23b8dbe144137"],"repoTags":["k8s.gcr.io/kube-proxy:v1.24.3"],"size":"39515847"},
{"id":"sha256:d921cee8494827575ce8b9cc6cf7dae988b6378ce3f62217bf430467916529b9","repoDigests":["docker.io/kindest/kindnetd@sha256:e2d4d675dcf28a90102ad5219b75c5a0ee096c4321247dfae31dd1467611a9fb"],"repoTags":["docker.io/kindest/kindnetd:v20220726-ed811e41"],"size":"25818452"},
{"id":"sha256:670dcc86b69df89a9d5a9e1a7ae5b8f67619c1c74e19de8a35f57d6c06505fd4","repoDigests":["docker.io/library/nginx@sha256:bd06dfe1f8f7758debd49d3876023992d41842fd8921565aed315a678a309982"],"repoTags":["docker.io/library/nginx:latest"],"size":"56729488"},
{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},
{"id":"sha256:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","repoDigests":["k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5"],"repoTags":["k8s.gcr.io/etcd:3.5.3-0"],"size":"102143581"},
{"id":"sha256:d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db","repoDigests":["k8s.gcr.io/kube-apiserver@sha256:a04609b85962da7e6531d32b75f652b4fb9f5fe0b0ee0aa160856faad8ec5d96"],"repoTags":["k8s.gcr.io/kube-apiserver:v1.24.3"],"size":"33796659"},
{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},
{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},
{"id":"sha256:586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f","repoDigests":["k8s.gcr.io/kube-controller-manager@sha256:f504eead8b8674ebc9067370ef51abbdc531b4a81813bfe464abccb8c76b6a53"],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.24.3"],"size":"31035788"},
{"id":"sha256:953320016496480176aa370c755dd0a494e6b690142ceac3dc10dc9e53991a7b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220801225035-9849"],"size":"1738"},
{"id":"sha256:3147495b3a5ce957dee2319099a8808c1418e0b0a2c82c9b2396c5fb4b688509","repoDigests":["docker.io/library/mysql@sha256:b3a86578a582617214477d91e47e850f9e18df0b5d1644fb2d96d91a340b8972"],"repoTags":["docker.io/library/mysql:5.7"],"size":"128384456"},
{"id":"sha256:e46bcc69753105cfd75905056666b92cee0d3e96ebf134b19f1b38de53cda93e","repoDigests":["docker.io/library/nginx@sha256:9c2030e1ff2c3fef7440a7fb69475553e548b9685683bdbf669ac0829b889d5f"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10205078"},
{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220801225035-9849"],"size":"10823156"},
{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},
{"id":"sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":["k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e"],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"13585107"},
{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},
{"id":"sha256:3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0","repoDigests":["k8s.gcr.io/kube-scheduler@sha256:e199523298224cd9f2a9a43c7c2c37fa57aff87648ed1e1de9984eba6f6005f0"],"repoTags":["k8s.gcr.io/kube-scheduler:v1.24.3"],"size":"15488985"},
{"id":"sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165","repoDigests":["k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c"],"repoTags":["k8s.gcr.io/pause:3.7"],"size":"311278"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls --format yaml:
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0
repoDigests:
- k8s.gcr.io/kube-scheduler@sha256:e199523298224cd9f2a9a43c7c2c37fa57aff87648ed1e1de9984eba6f6005f0
repoTags:
- k8s.gcr.io/kube-scheduler:v1.24.3
size: "15488985"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests:
- k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "13585107"
- id: sha256:586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f
repoDigests:
- k8s.gcr.io/kube-controller-manager@sha256:f504eead8b8674ebc9067370ef51abbdc531b4a81813bfe464abccb8c76b6a53
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.24.3
size: "31035788"
- id: sha256:3147495b3a5ce957dee2319099a8808c1418e0b0a2c82c9b2396c5fb4b688509
repoDigests:
- docker.io/library/mysql@sha256:b3a86578a582617214477d91e47e850f9e18df0b5d1644fb2d96d91a340b8972
repoTags:
- docker.io/library/mysql:5.7
size: "128384456"
- id: sha256:e46bcc69753105cfd75905056666b92cee0d3e96ebf134b19f1b38de53cda93e
repoDigests:
- docker.io/library/nginx@sha256:9c2030e1ff2c3fef7440a7fb69475553e548b9685683bdbf669ac0829b889d5f
repoTags:
- docker.io/library/nginx:alpine
size: "10205078"
- id: sha256:2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302
repoDigests:
- k8s.gcr.io/kube-proxy@sha256:c1b135231b5b1a6799346cd701da4b59e5b7ef8e694ec7b04fb23b8dbe144137
repoTags:
- k8s.gcr.io/kube-proxy:v1.24.3
size: "39515847"
- id: sha256:670dcc86b69df89a9d5a9e1a7ae5b8f67619c1c74e19de8a35f57d6c06505fd4
repoDigests:
- docker.io/library/nginx@sha256:bd06dfe1f8f7758debd49d3876023992d41842fd8921565aed315a678a309982
repoTags:
- docker.io/library/nginx:latest
size: "56729488"
- id: sha256:d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db
repoDigests:
- k8s.gcr.io/kube-apiserver@sha256:a04609b85962da7e6531d32b75f652b4fb9f5fe0b0ee0aa160856faad8ec5d96
repoTags:
- k8s.gcr.io/kube-apiserver:v1.24.3
size: "33796659"
- id: sha256:953320016496480176aa370c755dd0a494e6b690142ceac3dc10dc9e53991a7b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220801225035-9849
size: "1738"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220801225035-9849
size: "10823156"
- id: sha256:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b
repoDigests:
- k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5
repoTags:
- k8s.gcr.io/etcd:3.5.3-0
size: "102143581"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
repoDigests:
- k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c
repoTags:
- k8s.gcr.io/pause:3.7
size: "311278"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:6fb66cd78abfe9e0735a9a751f2586b7984e0d279e87fa8dd175781de6595627
repoDigests:
- docker.io/kindest/kindnetd@sha256:39494477a3fa001aae716b704a8991f4f62d2ccf1aaaa65692da6c805b18856c
repoTags:
- docker.io/kindest/kindnetd:v20220510-4929dd75
size: "45239873"
- id: sha256:d921cee8494827575ce8b9cc6cf7dae988b6378ce3f62217bf430467916529b9
repoDigests:
- docker.io/kindest/kindnetd@sha256:e2d4d675dcf28a90102ad5219b75c5a0ee096c4321247dfae31dd1467611a9fb
repoTags:
- docker.io/kindest/kindnetd:v20220726-ed811e41
size: "25818452"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
TestFunctional/parallel/ImageCommands/ImageBuild (4.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh pgrep buildkitd: exit status 1 (468.606312ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image build -t localhost/my-image:functional-20220801225035-9849 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 image build -t localhost/my-image:functional-20220801225035-9849 testdata/build: (3.613043805s)
functional_test.go:318: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20220801225035-9849 image build -t localhost/my-image:functional-20220801225035-9849 testdata/build:
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.1s
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.2s
#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:68020218aa9d8ab5f70023820da9a3cfb3da0935eed3f4d555d74aa5672f5985 done
#8 exporting config sha256:4a2789635c3da4a137dfa2e2d019949fe8dc9adfe5a3e63c200d58e6dcb66884 0.0s done
#8 naming to localhost/my-image:functional-20220801225035-9849 done
#8 DONE 0.1s
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls
2022/08/01 22:53:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.36s)

TestFunctional/parallel/ImageCommands/Setup (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.4956233s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220801225035-9849
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "414.963866ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1324: Took "81.684932ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: Took "464.16299ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1374: Took "99.512368ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220801225035-9849

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220801225035-9849: (4.86588629s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.17s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220801225035-9849 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (21.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220801225035-9849 apply -f testdata/testsvc.yaml

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [8ad729af-ac4c-4bd9-8204-9d369ab6b95a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [8ad729af-ac4c-4bd9-8204-9d369ab6b95a] Running
E0801 22:53:20.660342    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 21.006592674s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (21.20s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220801225035-9849

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220801225035-9849: (5.290189621s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.58s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.363727445s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220801225035-9849
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220801225035-9849

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220801225035-9849: (5.794856549s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.52s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image save gcr.io/google-containers/addon-resizer:functional-20220801225035-9849 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 image save gcr.io/google-containers/addon-resizer:functional-20220801225035-9849 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.820998307s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.82s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image rm gcr.io/google-containers/addon-resizer:functional-20220801225035-9849

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.325179341s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.56s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220801225035-9849
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220801225035-9849

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-linux-amd64 -p functional-20220801225035-9849 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220801225035-9849: (1.217927725s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220801225035-9849
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.28s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220801225035-9849 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.104.166.5 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220801225035-9849 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (10s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220801225035-9849 /tmp/TestFunctionalparallelMountCmdany-port3889764957/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1659394407643048065" to /tmp/TestFunctionalparallelMountCmdany-port3889764957/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1659394407643048065" to /tmp/TestFunctionalparallelMountCmdany-port3889764957/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1659394407643048065" to /tmp/TestFunctionalparallelMountCmdany-port3889764957/001/test-1659394407643048065
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (397.740065ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  1 22:53 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  1 22:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  1 22:53 test-1659394407643048065
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh cat /mount-9p/test-1659394407643048065

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220801225035-9849 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [bbea5292-ccdc-439e-a36e-bcf928fb2cf2] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [bbea5292-ccdc-439e-a36e-bcf928fb2cf2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [bbea5292-ccdc-439e-a36e-bcf928fb2cf2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [bbea5292-ccdc-439e-a36e-bcf928fb2cf2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.007982436s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220801225035-9849 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220801225035-9849 /tmp/TestFunctionalparallelMountCmdany-port3889764957/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.00s)

TestFunctional/parallel/MountCmd/specific-port (2.23s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220801225035-9849 /tmp/TestFunctionalparallelMountCmdspecific-port1151819329/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (364.417719ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220801225035-9849 /tmp/TestFunctionalparallelMountCmdspecific-port1151819329/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh "sudo umount -f /mount-9p": exit status 1 (354.536285ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220801225035-9849 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220801225035-9849 /tmp/TestFunctionalparallelMountCmdspecific-port1151819329/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.23s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220801225035-9849
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220801225035-9849
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220801225035-9849
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (73.97s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220801225351-9849 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0801 22:54:01.621575    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220801225351-9849 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m13.973430288s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (73.97s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.18s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220801225351-9849 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220801225351-9849 addons enable ingress --alsologtostderr -v=5: (10.183415473s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.18s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220801225351-9849 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (40.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:164: (dbg) Run:  kubectl --context ingress-addon-legacy-20220801225351-9849 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0801 22:55:23.542674    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
addons_test.go:164: (dbg) Done: kubectl --context ingress-addon-legacy-20220801225351-9849 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.042635659s)
addons_test.go:184: (dbg) Run:  kubectl --context ingress-addon-legacy-20220801225351-9849 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-20220801225351-9849 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [bb60fd1d-854a-4a0f-bf7e-5ddba666ed76] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [bb60fd1d-854a-4a0f-bf7e-5ddba666ed76] Running
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.005334754s
addons_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220801225351-9849 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Run:  kubectl --context ingress-addon-legacy-20220801225351-9849 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220801225351-9849 ip
addons_test.go:249: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220801225351-9849 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:258: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220801225351-9849 addons disable ingress-dns --alsologtostderr -v=1: (10.771436623s)
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220801225351-9849 addons disable ingress --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220801225351-9849 addons disable ingress --alsologtostderr -v=1: (7.264911175s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (40.38s)

TestJSONOutput/start/Command (44.94s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220801225559-9849 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220801225559-9849 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (44.943242406s)
--- PASS: TestJSONOutput/start/Command (44.94s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220801225559-9849 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220801225559-9849 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (20.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220801225559-9849 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220801225559-9849 --output=json --user=testUser: (20.161153564s)
--- PASS: TestJSONOutput/stop/Command (20.16s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.3s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220801225711-9849 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220801225711-9849 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.413151ms)

-- stdout --
	{"specversion":"1.0","id":"d9f6e3f1-5993-455b-9081-1fda19a9903a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220801225711-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7045d969-cb28-4b9f-be9e-c902cda0c30e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14695"}}
	{"specversion":"1.0","id":"1480a396-7c50-4940-b215-0be7fbb7172b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7b606fb1-0b52-4ed8-b3b9-ac761f325c64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig"}}
	{"specversion":"1.0","id":"16b280cc-5c7e-4d4c-9506-a3823e09f36b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube"}}
	{"specversion":"1.0","id":"64beb6c0-e3d6-40dc-a4b4-b1a0ddd1ede4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a2664898-8a81-4a6a-8adc-c6bec2ba511f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220801225711-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220801225711-9849
--- PASS: TestErrorJSONOutput (0.30s)

TestKicCustomNetwork/create_custom_network (35.94s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220801225711-9849 --network=
E0801 22:57:39.696522    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220801225711-9849 --network=: (33.667094013s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220801225711-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220801225711-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220801225711-9849: (2.235057995s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.94s)

TestKicCustomNetwork/use_default_bridge_network (30.3s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220801225747-9849 --network=bridge
E0801 22:58:03.163180    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 22:58:03.168437    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 22:58:03.178698    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 22:58:03.198959    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 22:58:03.239200    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 22:58:03.319516    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 22:58:03.479807    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 22:58:03.800339    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 22:58:04.441261    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 22:58:05.721818    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 22:58:07.383632    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 22:58:08.282490    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 22:58:13.403516    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220801225747-9849 --network=bridge: (28.159624791s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220801225747-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220801225747-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220801225747-9849: (2.105149607s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.30s)

TestKicExistingNetwork (29.78s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220801225817-9849 --network=existing-network
E0801 22:58:23.643847    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 22:58:44.124045    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220801225817-9849 --network=existing-network: (27.444477331s)
helpers_test.go:175: Cleaning up "existing-network-20220801225817-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220801225817-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220801225817-9849: (2.117995103s)
--- PASS: TestKicExistingNetwork (29.78s)

TestKicCustomSubnet (29.64s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220801225847-9849 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220801225847-9849 --subnet=192.168.60.0/24: (27.300930909s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220801225847-9849 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220801225847-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220801225847-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220801225847-9849: (2.302680625s)
--- PASS: TestKicCustomSubnet (29.64s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (64.31s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-20220801225917-9849 --driver=docker  --container-runtime=containerd
E0801 22:59:25.086506    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-20220801225917-9849 --driver=docker  --container-runtime=containerd: (34.673791317s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-20220801225917-9849 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-20220801225917-9849 --driver=docker  --container-runtime=containerd: (23.881937907s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-20220801225917-9849
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-20220801225917-9849
E0801 23:00:16.350481    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
E0801 23:00:16.355784    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
E0801 23:00:16.366013    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
E0801 23:00:16.386254    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
E0801 23:00:16.426544    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
E0801 23:00:16.506953    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
E0801 23:00:16.667518    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
E0801 23:00:16.987903    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "second-20220801225917-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-20220801225917-9849
E0801 23:00:17.628838    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
E0801 23:00:18.909311    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-20220801225917-9849: (2.021361853s)
helpers_test.go:175: Cleaning up "first-20220801225917-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-20220801225917-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-20220801225917-9849: (2.426548512s)
--- PASS: TestMinikubeProfile (64.31s)

TestMountStart/serial/StartWithMountFirst (5.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220801230021-9849 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0801 23:00:21.469584    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220801230021-9849 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.156088198s)
E0801 23:00:26.590677    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountFirst (5.16s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220801230021-9849 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

TestMountStart/serial/StartWithMountSecond (5s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220801230021-9849 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220801230021-9849 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.996109998s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.00s)

TestMountStart/serial/VerifyMountSecond (0.34s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220801230021-9849 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.34s)

TestMountStart/serial/DeleteFirst (1.82s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220801230021-9849 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220801230021-9849 --alsologtostderr -v=5: (1.816533891s)
--- PASS: TestMountStart/serial/DeleteFirst (1.82s)

TestMountStart/serial/VerifyMountPostDelete (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220801230021-9849 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220801230021-9849
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220801230021-9849: (1.274096098s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (6.72s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220801230021-9849
E0801 23:00:36.831521    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220801230021-9849: (5.723764798s)
--- PASS: TestMountStart/serial/RestartStopped (6.72s)

TestMountStart/serial/VerifyMountPostStop (0.34s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220801230021-9849 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.34s)

TestMultiNode/serial/FreshStart2Nodes (91.34s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220801230044-9849 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0801 23:00:47.007082    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 23:00:57.312307    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
E0801 23:01:38.272945    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220801230044-9849 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m30.781181607s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (91.34s)

TestMultiNode/serial/DeployApp2Nodes (4.22s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- rollout status deployment/busybox: (2.591017207s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- exec busybox-d46db594c-l8872 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- exec busybox-d46db594c-lcp6r -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- exec busybox-d46db594c-l8872 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- exec busybox-d46db594c-lcp6r -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- exec busybox-d46db594c-l8872 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- exec busybox-d46db594c-lcp6r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.22s)

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- exec busybox-d46db594c-l8872 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- exec busybox-d46db594c-l8872 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- exec busybox-d46db594c-lcp6r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220801230044-9849 -- exec busybox-d46db594c-lcp6r -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

TestMultiNode/serial/AddNode (35.25s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220801230044-9849 -v 3 --alsologtostderr
E0801 23:02:39.698914    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220801230044-9849 -v 3 --alsologtostderr: (34.467979246s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (35.25s)

TestMultiNode/serial/ProfileList (0.42s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.42s)

TestMultiNode/serial/CopyFile (12.2s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 cp testdata/cp-test.txt multinode-20220801230044-9849:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 cp multinode-20220801230044-9849:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1813500920/001/cp-test_multinode-20220801230044-9849.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 cp multinode-20220801230044-9849:/home/docker/cp-test.txt multinode-20220801230044-9849-m02:/home/docker/cp-test_multinode-20220801230044-9849_multinode-20220801230044-9849-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849-m02 "sudo cat /home/docker/cp-test_multinode-20220801230044-9849_multinode-20220801230044-9849-m02.txt"
E0801 23:03:00.193345    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 cp multinode-20220801230044-9849:/home/docker/cp-test.txt multinode-20220801230044-9849-m03:/home/docker/cp-test_multinode-20220801230044-9849_multinode-20220801230044-9849-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849-m03 "sudo cat /home/docker/cp-test_multinode-20220801230044-9849_multinode-20220801230044-9849-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 cp testdata/cp-test.txt multinode-20220801230044-9849-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 cp multinode-20220801230044-9849-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1813500920/001/cp-test_multinode-20220801230044-9849-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 cp multinode-20220801230044-9849-m02:/home/docker/cp-test.txt multinode-20220801230044-9849:/home/docker/cp-test_multinode-20220801230044-9849-m02_multinode-20220801230044-9849.txt
E0801 23:03:03.163078    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849 "sudo cat /home/docker/cp-test_multinode-20220801230044-9849-m02_multinode-20220801230044-9849.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 cp multinode-20220801230044-9849-m02:/home/docker/cp-test.txt multinode-20220801230044-9849-m03:/home/docker/cp-test_multinode-20220801230044-9849-m02_multinode-20220801230044-9849-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849-m03 "sudo cat /home/docker/cp-test_multinode-20220801230044-9849-m02_multinode-20220801230044-9849-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 cp testdata/cp-test.txt multinode-20220801230044-9849-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 cp multinode-20220801230044-9849-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1813500920/001/cp-test_multinode-20220801230044-9849-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 cp multinode-20220801230044-9849-m03:/home/docker/cp-test.txt multinode-20220801230044-9849:/home/docker/cp-test_multinode-20220801230044-9849-m03_multinode-20220801230044-9849.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849 "sudo cat /home/docker/cp-test_multinode-20220801230044-9849-m03_multinode-20220801230044-9849.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 cp multinode-20220801230044-9849-m03:/home/docker/cp-test.txt multinode-20220801230044-9849-m02:/home/docker/cp-test_multinode-20220801230044-9849-m03_multinode-20220801230044-9849-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 ssh -n multinode-20220801230044-9849-m02 "sudo cat /home/docker/cp-test_multinode-20220801230044-9849-m03_multinode-20220801230044-9849-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.20s)

TestMultiNode/serial/StopNode (2.46s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220801230044-9849 node stop m03: (1.269611077s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220801230044-9849 status: exit status 7 (594.580153ms)

-- stdout --
	multinode-20220801230044-9849
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220801230044-9849-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220801230044-9849-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220801230044-9849 status --alsologtostderr: exit status 7 (595.788811ms)

-- stdout --
	multinode-20220801230044-9849
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220801230044-9849-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220801230044-9849-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0801 23:03:11.031134  102175 out.go:296] Setting OutFile to fd 1 ...
	I0801 23:03:11.031228  102175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:03:11.031237  102175 out.go:309] Setting ErrFile to fd 2...
	I0801 23:03:11.031241  102175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:03:11.031347  102175 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 23:03:11.031500  102175 out.go:303] Setting JSON to false
	I0801 23:03:11.031518  102175 mustload.go:65] Loading cluster: multinode-20220801230044-9849
	I0801 23:03:11.031846  102175 config.go:180] Loaded profile config "multinode-20220801230044-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:03:11.031867  102175 status.go:253] checking status of multinode-20220801230044-9849 ...
	I0801 23:03:11.032250  102175 cli_runner.go:164] Run: docker container inspect multinode-20220801230044-9849 --format={{.State.Status}}
	I0801 23:03:11.065198  102175 status.go:328] multinode-20220801230044-9849 host status = "Running" (err=<nil>)
	I0801 23:03:11.065227  102175 host.go:66] Checking if "multinode-20220801230044-9849" exists ...
	I0801 23:03:11.065457  102175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220801230044-9849
	I0801 23:03:11.096612  102175 host.go:66] Checking if "multinode-20220801230044-9849" exists ...
	I0801 23:03:11.096893  102175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 23:03:11.096937  102175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220801230044-9849
	I0801 23:03:11.127991  102175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49227 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/multinode-20220801230044-9849/id_rsa Username:docker}
	I0801 23:03:11.206538  102175 ssh_runner.go:195] Run: systemctl --version
	I0801 23:03:11.209797  102175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 23:03:11.218320  102175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 23:03:11.320788  102175 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-08-01 23:03:11.248130841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 23:03:11.321637  102175 kubeconfig.go:92] found "multinode-20220801230044-9849" server: "https://192.168.58.2:8443"
	I0801 23:03:11.321670  102175 api_server.go:165] Checking apiserver status ...
	I0801 23:03:11.321709  102175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0801 23:03:11.330395  102175 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1213/cgroup
	I0801 23:03:11.337214  102175 api_server.go:181] apiserver freezer: "12:freezer:/docker/61fa0ed005b7f0060dcd0271e2af74dbf4c6d31d6d3ce5d9a3c7c5a30d926385/kubepods/burstable/pod81b8e5e31c72f18cf630005ce4a324d5/06753fad7ee573ee1511ed9a98f77fd09419df87b9d3fcf6b1c7cdd0afc7bb80"
	I0801 23:03:11.337255  102175 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/61fa0ed005b7f0060dcd0271e2af74dbf4c6d31d6d3ce5d9a3c7c5a30d926385/kubepods/burstable/pod81b8e5e31c72f18cf630005ce4a324d5/06753fad7ee573ee1511ed9a98f77fd09419df87b9d3fcf6b1c7cdd0afc7bb80/freezer.state
	I0801 23:03:11.343236  102175 api_server.go:203] freezer state: "THAWED"
	I0801 23:03:11.343260  102175 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0801 23:03:11.347860  102175 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0801 23:03:11.347882  102175 status.go:419] multinode-20220801230044-9849 apiserver status = Running (err=<nil>)
	I0801 23:03:11.347904  102175 status.go:255] multinode-20220801230044-9849 status: &{Name:multinode-20220801230044-9849 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0801 23:03:11.347922  102175 status.go:253] checking status of multinode-20220801230044-9849-m02 ...
	I0801 23:03:11.348133  102175 cli_runner.go:164] Run: docker container inspect multinode-20220801230044-9849-m02 --format={{.State.Status}}
	I0801 23:03:11.379728  102175 status.go:328] multinode-20220801230044-9849-m02 host status = "Running" (err=<nil>)
	I0801 23:03:11.379749  102175 host.go:66] Checking if "multinode-20220801230044-9849-m02" exists ...
	I0801 23:03:11.379996  102175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220801230044-9849-m02
	I0801 23:03:11.411521  102175 host.go:66] Checking if "multinode-20220801230044-9849-m02" exists ...
	I0801 23:03:11.411828  102175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0801 23:03:11.411871  102175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220801230044-9849-m02
	I0801 23:03:11.444828  102175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49232 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/machines/multinode-20220801230044-9849-m02/id_rsa Username:docker}
	I0801 23:03:11.522480  102175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0801 23:03:11.531127  102175 status.go:255] multinode-20220801230044-9849-m02 status: &{Name:multinode-20220801230044-9849-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0801 23:03:11.531170  102175 status.go:253] checking status of multinode-20220801230044-9849-m03 ...
	I0801 23:03:11.531389  102175 cli_runner.go:164] Run: docker container inspect multinode-20220801230044-9849-m03 --format={{.State.Status}}
	I0801 23:03:11.564372  102175 status.go:328] multinode-20220801230044-9849-m03 host status = "Stopped" (err=<nil>)
	I0801 23:03:11.564393  102175 status.go:341] host is not running, skipping remaining checks
	I0801 23:03:11.564399  102175 status.go:255] multinode-20220801230044-9849-m03 status: &{Name:multinode-20220801230044-9849-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)

TestMultiNode/serial/StartAfterStop (30.9s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 node start m03 --alsologtostderr
E0801 23:03:30.847516    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220801230044-9849 node start m03 --alsologtostderr: (30.069387092s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.90s)

TestMultiNode/serial/RestartKeepsNodes (171.8s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220801230044-9849
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220801230044-9849
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220801230044-9849: (41.252021975s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220801230044-9849 --wait=true -v=8 --alsologtostderr
E0801 23:05:16.350843    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
E0801 23:05:44.034095    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220801230044-9849 --wait=true -v=8 --alsologtostderr: (2m10.427506008s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220801230044-9849
--- PASS: TestMultiNode/serial/RestartKeepsNodes (171.80s)

TestMultiNode/serial/DeleteNode (5.14s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220801230044-9849 node delete m03: (4.438151382s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.14s)

TestMultiNode/serial/StopMultiNode (40.34s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220801230044-9849 stop: (40.090311048s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220801230044-9849 status: exit status 7 (127.256202ms)

-- stdout --
	multinode-20220801230044-9849
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220801230044-9849-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220801230044-9849 status --alsologtostderr: exit status 7 (126.892748ms)

-- stdout --
	multinode-20220801230044-9849
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220801230044-9849-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0801 23:07:19.692042  113030 out.go:296] Setting OutFile to fd 1 ...
	I0801 23:07:19.692170  113030 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:07:19.692184  113030 out.go:309] Setting ErrFile to fd 2...
	I0801 23:07:19.692191  113030 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:07:19.692301  113030 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 23:07:19.692461  113030 out.go:303] Setting JSON to false
	I0801 23:07:19.692482  113030 mustload.go:65] Loading cluster: multinode-20220801230044-9849
	I0801 23:07:19.692806  113030 config.go:180] Loaded profile config "multinode-20220801230044-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:07:19.692822  113030 status.go:253] checking status of multinode-20220801230044-9849 ...
	I0801 23:07:19.693178  113030 cli_runner.go:164] Run: docker container inspect multinode-20220801230044-9849 --format={{.State.Status}}
	I0801 23:07:19.725426  113030 status.go:328] multinode-20220801230044-9849 host status = "Stopped" (err=<nil>)
	I0801 23:07:19.725448  113030 status.go:341] host is not running, skipping remaining checks
	I0801 23:07:19.725453  113030 status.go:255] multinode-20220801230044-9849 status: &{Name:multinode-20220801230044-9849 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0801 23:07:19.725486  113030 status.go:253] checking status of multinode-20220801230044-9849-m02 ...
	I0801 23:07:19.725721  113030 cli_runner.go:164] Run: docker container inspect multinode-20220801230044-9849-m02 --format={{.State.Status}}
	I0801 23:07:19.757419  113030 status.go:328] multinode-20220801230044-9849-m02 host status = "Stopped" (err=<nil>)
	I0801 23:07:19.757443  113030 status.go:341] host is not running, skipping remaining checks
	I0801 23:07:19.757449  113030 status.go:255] multinode-20220801230044-9849-m02 status: &{Name:multinode-20220801230044-9849-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.34s)
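
The two status stanzas captured above are plain text rather than JSON: stanzas are separated by blank lines, the first line of each is the node name, and the rest are `field: value` pairs. When a script needs that output structured, a small parser is enough. A minimal sketch (`parse_status` is an illustrative helper, not minikube code):

```python
def parse_status(output: str) -> dict:
    """Parse plain-text `minikube status` output into {node: {field: value}}.

    Stanzas are separated by blank lines; the first line of a stanza is
    the node name, the remaining lines are `field: value` pairs.
    """
    nodes, current = {}, None
    for raw in output.splitlines():
        line = raw.strip()
        if not line:
            current = None          # blank line ends the current stanza
        elif current is None:
            current = line          # stanza header: the node name
            nodes[current] = {}
        else:
            field, _, value = line.partition(":")
            nodes[current][field.strip()] = value.strip()
    return nodes


# Sample taken from the captured stdout above.
sample = """\
multinode-20220801230044-9849
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-20220801230044-9849-m02
type: Worker
host: Stopped
kubelet: Stopped
"""
```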

TestMultiNode/serial/RestartMultiNode (106.19s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220801230044-9849 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0801 23:07:39.696681    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 23:08:03.163594    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
E0801 23:09:02.744461    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220801230044-9849 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m45.481343467s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220801230044-9849 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (106.19s)

TestMultiNode/serial/ValidateNameConflict (26.51s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220801230044-9849
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220801230044-9849-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220801230044-9849-m02 --driver=docker  --container-runtime=containerd: exit status 14 (83.192719ms)

-- stdout --
	* [multinode-20220801230044-9849-m02] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220801230044-9849-m02' is duplicated with machine name 'multinode-20220801230044-9849-m02' in profile 'multinode-20220801230044-9849'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220801230044-9849-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220801230044-9849-m03 --driver=docker  --container-runtime=containerd: (23.733630305s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220801230044-9849
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220801230044-9849: exit status 80 (348.671384ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220801230044-9849
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220801230044-9849-m03 already exists in multinode-20220801230044-9849-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220801230044-9849-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220801230044-9849-m03: (2.279647335s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.51s)
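
The duplicate-name rejection above boils down to checking a proposed profile name against every existing profile and the machine names it owns (a multi-node profile owns `<profile>`, `<profile>-m02`, ...). A hedged sketch of that check (`validate_profile_name` and the data shape are illustrative assumptions, not minikube's internals):

```python
from typing import Dict, List, Optional


def validate_profile_name(proposed: str,
                          profiles: Dict[str, List[str]]) -> Optional[str]:
    """Return an error message if `proposed` collides with an existing
    profile name or with a machine name owned by a multi-node profile.
    Illustrative sketch of the MK_USAGE guard seen in the log above.
    """
    for profile, machines in profiles.items():
        if proposed == profile or proposed in machines:
            return (f"Profile name '{proposed}' is duplicated with machine "
                    f"name '{proposed}' in profile '{profile}'")
    return None  # name is unique


# Machine names as the multi-node test above would have created them.
existing = {
    "multinode-20220801230044-9849": [
        "multinode-20220801230044-9849",
        "multinode-20220801230044-9849-m02",
    ],
}
```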

TestPreload (115.16s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220801230936-9849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
E0801 23:10:16.351000    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220801230936-9849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m5.833419035s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220801230936-9849 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220801230936-9849 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.907255912s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220801230936-9849 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220801230936-9849 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (44.579967735s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220801230936-9849 -- sudo crictl image ls
helpers_test.go:175: Cleaning up "test-preload-20220801230936-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220801230936-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220801230936-9849: (2.458570232s)
--- PASS: TestPreload (115.16s)

TestScheduledStopUnix (101.53s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220801231131-9849 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220801231131-9849 --memory=2048 --driver=docker  --container-runtime=containerd: (24.628464498s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220801231131-9849 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220801231131-9849 -n scheduled-stop-20220801231131-9849
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220801231131-9849 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220801231131-9849 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220801231131-9849 -n scheduled-stop-20220801231131-9849
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220801231131-9849
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220801231131-9849 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0801 23:12:39.696876    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 23:13:03.165241    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220801231131-9849
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220801231131-9849: exit status 7 (93.759365ms)

-- stdout --
	scheduled-stop-20220801231131-9849
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220801231131-9849 -n scheduled-stop-20220801231131-9849
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220801231131-9849 -n scheduled-stop-20220801231131-9849: exit status 7 (91.575611ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220801231131-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220801231131-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220801231131-9849: (5.146696208s)
--- PASS: TestScheduledStopUnix (101.53s)

TestInsufficientStorage (16.3s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220801231313-9849 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220801231313-9849 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.618678345s)

-- stdout --
	{"specversion":"1.0","id":"efa0d491-e6b0-4c86-9739-2f6d3efcbe86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220801231313-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"578e9616-c6ee-4080-95c9-db36ed949892","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14695"}}
	{"specversion":"1.0","id":"ca8f74b1-9e43-4b86-9cd6-98b1b9d52ff0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7e81292b-7455-4744-bb68-19bf3b13ee6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig"}}
	{"specversion":"1.0","id":"bb3e020d-197c-48f1-870b-3ff06a99a973","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube"}}
	{"specversion":"1.0","id":"ee2da0cc-0ebd-4017-b4d4-5d4dea8292fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"918b01a8-426a-4333-a324-56d2936b0e04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ab2a1cbe-a7eb-4c26-9763-249af7442ecb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f1ce5000-4e02-4eb0-b52f-8da8dd86757c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"acbfa572-43a5-401a-8403-be4b4bd4967c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0b0d01c7-0cdc-445f-a870-5382c680da63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220801231313-9849 in cluster insufficient-storage-20220801231313-9849","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7da23eeb-07d8-4686-b56b-a11ad032b76b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c84dbf5-93d4-4722-b811-d32365d6014f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bcfbabdf-d8fc-4a21-bd3d-1e881dc0dd2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
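
With `--output=json`, minikube writes one CloudEvents-style JSON object per line; the run above ends with an `io.k8s.sigs.minikube.error` event carrying `exitcode` 26 (`RSRC_DOCKER_STORAGE`). A small sketch for pulling that event out of a captured stream (`find_error_event` is an illustrative helper, not a minikube API):

```python
import json


def find_error_event(stream: str):
    """Return the `data` payload of the first io.k8s.sigs.minikube.error
    event in a JSON-lines CloudEvents stream, or None if there is none."""
    for line in stream.splitlines():
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            return event.get("data", {})
    return None


# Two abridged events from the captured stream above.
sample = "\n".join([
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info",'
    '"data":{"message":"MINIKUBE_LOCATION=14695"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}',
])
```
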
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220801231313-9849 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220801231313-9849 --output=json --layout=cluster: exit status 7 (348.647925ms)

-- stdout --
	{"Name":"insufficient-storage-20220801231313-9849","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220801231313-9849","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0801 23:13:23.254249  134198 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220801231313-9849" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220801231313-9849 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220801231313-9849 --output=json --layout=cluster: exit status 7 (354.19638ms)

-- stdout --
	{"Name":"insufficient-storage-20220801231313-9849","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220801231313-9849","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0801 23:13:23.608929  134309 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220801231313-9849" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	E0801 23:13:23.616761  134309 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/insufficient-storage-20220801231313-9849/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220801231313-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220801231313-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220801231313-9849: (5.973264094s)
--- PASS: TestInsufficientStorage (16.30s)
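
The `--layout=cluster` payloads above use HTTP-style status codes (507 InsufficientStorage, 500 Error, 405 Stopped). A sketch that flags non-OK components from such output; treating codes >= 400 as unhealthy is an assumption made here for illustration, not documented minikube behavior:

```python
import json


def unhealthy_components(status_json: str):
    """Collect (component, StatusName) pairs whose StatusCode looks
    unhealthy in `minikube status --output=json --layout=cluster` output.
    The >= 400 threshold is an assumption, mirroring HTTP conventions."""
    doc = json.loads(status_json)
    bad = []

    def scan(components):
        for name, comp in components.items():
            if comp.get("StatusCode", 0) >= 400:
                bad.append((name, comp.get("StatusName")))

    scan(doc.get("Components", {}))           # cluster-level components
    for node in doc.get("Nodes", []):         # then per-node components
        scan(node.get("Components", {}))
    return bad


# Abridged from the --layout=cluster output captured above.
sample = json.dumps({
    "Name": "insufficient-storage-20220801231313-9849",
    "StatusCode": 507,
    "Components": {"kubeconfig": {"StatusCode": 500, "StatusName": "Error"}},
    "Nodes": [{
        "Components": {
            "apiserver": {"StatusCode": 405, "StatusName": "Stopped"},
            "kubelet": {"StatusCode": 405, "StatusName": "Stopped"},
        },
    }],
})
```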

TestRunningBinaryUpgrade (113.03s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.3192414682.exe start -p running-upgrade-20220801231329-9849 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.3192414682.exe start -p running-upgrade-20220801231329-9849 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m5.147311796s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220801231329-9849 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220801231329-9849 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.877666033s)
helpers_test.go:175: Cleaning up "running-upgrade-20220801231329-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220801231329-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220801231329-9849: (4.492752975s)
--- PASS: TestRunningBinaryUpgrade (113.03s)

TestMissingContainerUpgrade (144.13s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.1588857585.exe start -p missing-upgrade-20220801231444-9849 --memory=2200 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.1588857585.exe start -p missing-upgrade-20220801231444-9849 --memory=2200 --driver=docker  --container-runtime=containerd: (1m25.327931432s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220801231444-9849
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220801231444-9849: (12.644238176s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220801231444-9849
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220801231444-9849 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220801231444-9849 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.496484041s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220801231444-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220801231444-9849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220801231444-9849: (3.165910049s)
--- PASS: TestMissingContainerUpgrade (144.13s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220801231329-9849 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220801231329-9849 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (97.008863ms)

-- stdout --
	* [NoKubernetes-20220801231329-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
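
The exit-status-14 run above shows that `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. That guard can be sketched as follows (illustrative only, not minikube's actual implementation):

```python
from typing import Optional


def validate_start_flags(no_kubernetes: bool,
                         kubernetes_version: str = "") -> Optional[str]:
    """Mutual-exclusion guard mirroring the MK_USAGE error above:
    a cluster cannot both skip Kubernetes and pin its version.
    Returns an error message, or None when the flags are compatible."""
    if no_kubernetes and kubernetes_version:
        return "cannot specify --kubernetes-version with --no-kubernetes"
    return None
```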

TestStoppedBinaryUpgrade/Setup (0.48s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

TestNoKubernetes/serial/StartWithK8s (50.8s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220801231329-9849 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220801231329-9849 --driver=docker  --container-runtime=containerd: (50.330846931s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220801231329-9849 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (50.80s)

TestStoppedBinaryUpgrade/Upgrade (136.08s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.1689058679.exe start -p stopped-upgrade-20220801231329-9849 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.1689058679.exe start -p stopped-upgrade-20220801231329-9849 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m4.350580221s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.1689058679.exe -p stopped-upgrade-20220801231329-9849 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.1689058679.exe -p stopped-upgrade-20220801231329-9849 stop: (1.340442253s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220801231329-9849 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220801231329-9849 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m10.385169958s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (136.08s)

TestNoKubernetes/serial/StartWithStopK8s (5.89s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220801231329-9849 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220801231329-9849 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.070714562s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220801231329-9849 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220801231329-9849 status -o json: exit status 2 (447.336118ms)

-- stdout --
	{"Name":"NoKubernetes-20220801231329-9849","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220801231329-9849
E0801 23:14:26.207833    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220801231329-9849: (2.372520127s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.89s)

TestNoKubernetes/serial/Start (4.9s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220801231329-9849 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220801231329-9849 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.895129424s)
--- PASS: TestNoKubernetes/serial/Start (4.90s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220801231329-9849 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220801231329-9849 "sudo systemctl is-active --quiet service kubelet": exit status 1 (441.411994ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

TestNoKubernetes/serial/ProfileList (2.34s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.0387452s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.305388622s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.34s)

TestNoKubernetes/serial/Stop (1.36s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220801231329-9849

=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220801231329-9849: (1.364544521s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

TestNoKubernetes/serial/StartNoArgs (7.08s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220801231329-9849 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220801231329-9849 --driver=docker  --container-runtime=containerd: (7.076970006s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.08s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220801231329-9849 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220801231329-9849 "sudo systemctl is-active --quiet service kubelet": exit status 1 (448.678716ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

TestPause/serial/Start (73.64s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220801231522-9849 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220801231522-9849 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m13.642163526s)
--- PASS: TestPause/serial/Start (73.64s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220801231329-9849
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220801231329-9849: (1.024253851s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

TestNetworkPlugins/group/false (0.49s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220801231634-9849 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20220801231634-9849 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (236.077701ms)

-- stdout --
	* [false-20220801231634-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14695
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0801 23:16:34.679018  172809 out.go:296] Setting OutFile to fd 1 ...
	I0801 23:16:34.679122  172809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:16:34.679134  172809 out.go:309] Setting ErrFile to fd 2...
	I0801 23:16:34.679138  172809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0801 23:16:34.679257  172809 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/bin
	I0801 23:16:34.679845  172809 out.go:303] Setting JSON to false
	I0801 23:16:34.681196  172809 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3545,"bootTime":1659392250,"procs":743,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0801 23:16:34.681264  172809 start.go:125] virtualization: kvm guest
	I0801 23:16:34.683785  172809 out.go:177] * [false-20220801231634-9849] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0801 23:16:34.685247  172809 out.go:177]   - MINIKUBE_LOCATION=14695
	I0801 23:16:34.685195  172809 notify.go:193] Checking for updates...
	I0801 23:16:34.686775  172809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0801 23:16:34.688071  172809 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/kubeconfig
	I0801 23:16:34.689319  172809 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube
	I0801 23:16:34.690489  172809 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0801 23:16:34.692082  172809 config.go:180] Loaded profile config "kubernetes-upgrade-20220801231451-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:16:34.692168  172809 config.go:180] Loaded profile config "missing-upgrade-20220801231444-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.0
	I0801 23:16:34.692245  172809 config.go:180] Loaded profile config "pause-20220801231522-9849": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.3
	I0801 23:16:34.692280  172809 driver.go:365] Setting default libvirt URI to qemu:///system
	I0801 23:16:34.733681  172809 docker.go:137] docker version: linux-20.10.17
	I0801 23:16:34.733784  172809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0801 23:16:34.838238  172809 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:58 SystemTime:2022-08-01 23:16:34.763741948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662443520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0801 23:16:34.838389  172809 docker.go:254] overlay module found
	I0801 23:16:34.840576  172809 out.go:177] * Using the docker driver based on user configuration
	I0801 23:16:34.841926  172809 start.go:284] selected driver: docker
	I0801 23:16:34.841941  172809 start.go:808] validating driver "docker" against <nil>
	I0801 23:16:34.841960  172809 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0801 23:16:34.844039  172809 out.go:177] 
	W0801 23:16:34.845347  172809 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0801 23:16:34.846671  172809 out.go:177] 

** /stderr **
helpers_test.go:175: Cleaning up "false-20220801231634-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20220801231634-9849
--- PASS: TestNetworkPlugins/group/false (0.49s)

TestPause/serial/SecondStartNoReconfiguration (15.8s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220801231522-9849 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0801 23:16:39.394502    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220801231522-9849 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.787091775s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (15.80s)

TestPause/serial/Pause (0.8s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220801231522-9849 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

TestPause/serial/VerifyStatus (0.42s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220801231522-9849 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220801231522-9849 --output=json --layout=cluster: exit status 2 (418.099501ms)

-- stdout --
	{"Name":"pause-20220801231522-9849","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220801231522-9849","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)

TestPause/serial/Unpause (0.71s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220801231522-9849 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

TestPause/serial/PauseAgain (1.01s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220801231522-9849 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20220801231522-9849 --alsologtostderr -v=5: (1.012863022s)
--- PASS: TestPause/serial/PauseAgain (1.01s)

TestPause/serial/DeletePaused (3.62s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220801231522-9849 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220801231522-9849 --alsologtostderr -v=5: (3.624490101s)
--- PASS: TestPause/serial/DeletePaused (3.62s)

TestPause/serial/VerifyDeletedResources (5.91s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (5.784397267s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220801231522-9849
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220801231522-9849: exit status 1 (36.441997ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220801231522-9849

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (5.91s)

TestStartStop/group/old-k8s-version/serial/FirstStart (118.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220801231735-9849 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220801231735-9849 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m58.938287074s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (118.94s)

TestStartStop/group/no-preload/serial/FirstStart (50.96s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220801231743-9849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3
E0801 23:18:03.163240    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220801231743-9849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (50.960654125s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.96s)

TestStartStop/group/no-preload/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220801231743-9849 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [cbb348d8-b54f-4d72-8886-7a9032846d3b] Pending
helpers_test.go:342: "busybox" [cbb348d8-b54f-4d72-8886-7a9032846d3b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [cbb348d8-b54f-4d72-8886-7a9032846d3b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.011977741s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220801231743-9849 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.43s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.62s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220801231743-9849 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220801231743-9849 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.62s)

TestStartStop/group/no-preload/serial/Stop (20.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220801231743-9849 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220801231743-9849 --alsologtostderr -v=3: (20.167540785s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.17s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220801231743-9849 -n no-preload-20220801231743-9849
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220801231743-9849 -n no-preload-20220801231743-9849: exit status 7 (106.734121ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220801231743-9849 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (312.49s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220801231743-9849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220801231743-9849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (5m11.896402078s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220801231743-9849 -n no-preload-20220801231743-9849
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (312.49s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220801231735-9849 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [fab27aad-2838-48d2-8304-cf6e0392f2bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [fab27aad-2838-48d2-8304-cf6e0392f2bc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.011562054s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220801231735-9849 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220801231735-9849 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220801231735-9849 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (20.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220801231735-9849 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220801231735-9849 --alsologtostderr -v=3: (20.16298619s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.16s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220801231735-9849 -n old-k8s-version-20220801231735-9849
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220801231735-9849 -n old-k8s-version-20220801231735-9849: exit status 7 (111.842817ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220801231735-9849 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (448.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220801231735-9849 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0801 23:20:16.350452    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220801231735-9849 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m27.571971631s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220801231735-9849 -n old-k8s-version-20220801231735-9849
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (448.02s)

TestStartStop/group/embed-certs/serial/FirstStart (55.32s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220801232037-9849 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220801232037-9849 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (55.318761854s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.32s)

TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220801232037-9849 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [bf6b4ed4-e7a9-4851-a142-00cad0d5bf84] Pending
helpers_test.go:342: "busybox" [bf6b4ed4-e7a9-4851-a142-00cad0d5bf84] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [bf6b4ed4-e7a9-4851-a142-00cad0d5bf84] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.01094886s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220801232037-9849 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.66s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220801232037-9849 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220801232037-9849 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.66s)

TestStartStop/group/embed-certs/serial/Stop (20.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220801232037-9849 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220801232037-9849 --alsologtostderr -v=3: (20.18362214s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.18s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220801232037-9849 -n embed-certs-20220801232037-9849
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220801232037-9849 -n embed-certs-20220801232037-9849: exit status 7 (103.683361ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220801232037-9849 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (316.42s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220801232037-9849 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3
E0801 23:22:39.696169    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
E0801 23:23:03.163673    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/functional-20220801225035-9849/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220801232037-9849 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (5m15.849652066s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220801232037-9849 -n embed-certs-20220801232037-9849
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (316.42s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-6f9d5" [15c949bc-2637-410e-acfc-cbd90f23880d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014132718s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-6f9d5" [15c949bc-2637-410e-acfc-cbd90f23880d] Running

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006917083s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-20220801231743-9849 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220801231743-9849 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.42s)

TestStartStop/group/no-preload/serial/Pause (3.37s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220801231743-9849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220801231743-9849 -n no-preload-20220801231743-9849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220801231743-9849 -n no-preload-20220801231743-9849: exit status 2 (401.344215ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220801231743-9849 -n no-preload-20220801231743-9849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220801231743-9849 -n no-preload-20220801231743-9849: exit status 2 (413.706675ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220801231743-9849 --alsologtostderr -v=1

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220801231743-9849 -n no-preload-20220801231743-9849
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220801231743-9849 -n no-preload-20220801231743-9849
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.37s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (56.39s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220801232429-9849 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220801232429-9849 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (56.388029397s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (56.39s)

TestStartStop/group/newest-cni/serial/FirstStart (37.19s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220801232437-9849 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220801232437-9849 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (37.194774719s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.19s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.57s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220801232437-9849 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.57s)

TestStartStop/group/newest-cni/serial/Stop (20.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220801232437-9849 --alsologtostderr -v=3
E0801 23:25:16.351300    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220801232437-9849 --alsologtostderr -v=3: (20.269422194s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.27s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220801232429-9849 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5fcd6ba3-2b31-4422-95f8-db2a9c8281b4] Pending
helpers_test.go:342: "busybox" [5fcd6ba3-2b31-4422-95f8-db2a9c8281b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [5fcd6ba3-2b31-4422-95f8-db2a9c8281b4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.012618812s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220801232429-9849 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.32s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.71s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220801232429-9849 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220801232429-9849 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.71s)

TestStartStop/group/default-k8s-different-port/serial/Stop (20.29s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220801232429-9849 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220801232429-9849 --alsologtostderr -v=3: (20.289434636s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220801232437-9849 -n newest-cni-20220801232437-9849
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220801232437-9849 -n newest-cni-20220801232437-9849: exit status 7 (125.016758ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220801232437-9849 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (31.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220801232437-9849 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3
E0801 23:25:42.745115    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220801232437-9849 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (30.911298917s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220801232437-9849 -n newest-cni-20220801232437-9849
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.37s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220801232429-9849 -n default-k8s-different-port-20220801232429-9849
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220801232429-9849 -n default-k8s-different-port-20220801232429-9849: exit status 7 (107.676793ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220801232429-9849 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (560.48s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220801232429-9849 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220801232429-9849 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.3: (9m20.068585865s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220801232429-9849 -n default-k8s-different-port-20220801232429-9849
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (560.48s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220801232437-9849 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/newest-cni/serial/Pause (3.43s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220801232437-9849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220801232437-9849 -n newest-cni-20220801232437-9849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220801232437-9849 -n newest-cni-20220801232437-9849: exit status 2 (438.73303ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220801232437-9849 -n newest-cni-20220801232437-9849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220801232437-9849 -n newest-cni-20220801232437-9849: exit status 2 (434.39982ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220801232437-9849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220801232437-9849 -n newest-cni-20220801232437-9849
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220801232437-9849 -n newest-cni-20220801232437-9849
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.43s)

TestNetworkPlugins/group/auto/Start (47s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220801231634-9849 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220801231634-9849 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (47.002698932s)
--- PASS: TestNetworkPlugins/group/auto/Start (47.00s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220801231634-9849 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

TestNetworkPlugins/group/auto/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220801231634-9849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-7p4ml" [74ead861-cc4a-4840-9fd4-b27b81a15225] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-7p4ml" [74ead861-cc4a-4840-9fd4-b27b81a15225] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005938292s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.18s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220801231634-9849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220801231634-9849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (59.78s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220801231634-9849 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220801231634-9849 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (59.780049572s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.78s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-6rqj6" [2cd59aa8-f118-473c-9fad-d8f9537613a3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013615002s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-6rqj6" [2cd59aa8-f118-473c-9fad-d8f9537613a3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006545921s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220801232037-9849 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220801232037-9849 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/embed-certs/serial/Pause (3.22s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220801232037-9849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220801232037-9849 -n embed-certs-20220801232037-9849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220801232037-9849 -n embed-certs-20220801232037-9849: exit status 2 (385.975897ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220801232037-9849 -n embed-certs-20220801232037-9849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220801232037-9849 -n embed-certs-20220801232037-9849: exit status 2 (388.096116ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220801232037-9849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220801232037-9849 -n embed-certs-20220801232037-9849

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220801232037-9849 -n embed-certs-20220801232037-9849
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.22s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-dx56l" [2914bc0b-70b0-43f7-afd8-5f1dde198b4d] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012600902s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestNetworkPlugins/group/cilium/Start (74.8s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220801231635-9849 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220801231635-9849 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m14.804654537s)
--- PASS: TestNetworkPlugins/group/cilium/Start (74.80s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-dx56l" [2914bc0b-70b0-43f7-afd8-5f1dde198b4d] Running
E0801 23:27:39.696438    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/addons-20220801224545-9849/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006402546s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220801231735-9849 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220801231735-9849 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.63s)

TestStartStop/group/old-k8s-version/serial/Pause (3.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220801231735-9849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-20220801231735-9849 --alsologtostderr -v=1: (1.205960599s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801231735-9849 -n old-k8s-version-20220801231735-9849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801231735-9849 -n old-k8s-version-20220801231735-9849: exit status 2 (551.060858ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220801231735-9849 -n old-k8s-version-20220801231735-9849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220801231735-9849 -n old-k8s-version-20220801231735-9849: exit status 2 (470.702691ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220801231735-9849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220801231735-9849 -n old-k8s-version-20220801231735-9849
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220801231735-9849 -n old-k8s-version-20220801231735-9849
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.98s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-chrlx" [009623f2-7700-4f91-a59c-ddce1ea6d26a] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.012467013s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220801231634-9849 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220801231634-9849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-b769q" [59a5ad76-afbd-42d8-b385-978b8316963b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-b769q" [59a5ad76-afbd-42d8-b385-978b8316963b] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.007490033s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220801231634-9849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220801231634-9849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220801231634-9849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (301.67s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220801231634-9849 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0801 23:28:35.222401    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801231743-9849/client.crt: no such file or directory
E0801 23:28:35.862815    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801231743-9849/client.crt: no such file or directory
E0801 23:28:37.143496    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801231743-9849/client.crt: no such file or directory
E0801 23:28:39.704526    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801231743-9849/client.crt: no such file or directory
E0801 23:28:44.825373    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801231743-9849/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220801231634-9849 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (5m1.670877641s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (301.67s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-5ml8q" [b5c43280-eb4d-47dc-94d6-868286eb1a04] Running
E0801 23:28:55.066424    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801231743-9849/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.013476267s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220801231635-9849 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.39s)

TestNetworkPlugins/group/cilium/NetCatPod (10.86s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220801231635-9849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-24fhk" [0c3dcced-f0b9-4bbb-87c2-3eba7d0a5c36] Pending
helpers_test.go:342: "netcat-869c55b6dc-24fhk" [0c3dcced-f0b9-4bbb-87c2-3eba7d0a5c36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-24fhk" [0c3dcced-f0b9-4bbb-87c2-3eba7d0a5c36] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.005472133s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (10.86s)

TestNetworkPlugins/group/cilium/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220801231635-9849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.17s)

TestNetworkPlugins/group/cilium/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220801231635-9849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.16s)

TestNetworkPlugins/group/cilium/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220801231635-9849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (40.67s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220801231634-9849 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
E0801 23:29:15.546874    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801231743-9849/client.crt: no such file or directory
E0801 23:29:34.915760    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:29:34.921036    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:29:34.931314    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:29:34.951560    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:29:34.991878    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:29:35.072174    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:29:35.232276    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:29:35.552854    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:29:36.193184    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:29:37.473574    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:29:40.034183    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
E0801 23:29:45.155224    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
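The retry intervals between the repeated cert_rotation errors above roughly double each time (10ms, 20ms, 40ms, ... 5.12s), the signature of exponential retry backoff after the old-k8s-version profile's client certificate was deleted. A small check against the logged timestamps (the parsing code is illustrative, not part of the test suite):

```python
from datetime import datetime

# Timestamps of the repeated cert_rotation errors above, copied from the log.
stamps = [
    "23:29:34.921036", "23:29:34.931314", "23:29:34.951560", "23:29:34.991878",
    "23:29:35.072174", "23:29:35.232276", "23:29:35.552854", "23:29:36.193184",
    "23:29:37.473574", "23:29:40.034183", "23:29:45.155224",
]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
# Seconds between consecutive retries, and the ratio between adjacent gaps.
deltas = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
ratios = [b / a for a, b in zip(deltas, deltas[1:])]
print(deltas)   # ~0.01, 0.02, 0.04, ... 5.12 seconds
print(ratios)   # each interval roughly doubles
```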
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220801231634-9849 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (40.667890681s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.67s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220801231634-9849 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220801231634-9849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-bqxl5" [2ac95d96-c327-467d-b44d-6c13bef58435] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0801 23:29:55.395806    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/old-k8s-version-20220801231735-9849/client.crt: no such file or directory
helpers_test.go:342: "netcat-869c55b6dc-bqxl5" [2ac95d96-c327-467d-b44d-6c13bef58435] Running
E0801 23:29:56.507779    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/no-preload-20220801231743-9849/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.006334341s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220801231634-9849 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220801231634-9849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-s59mm" [9870580d-589b-4ace-b82b-4ed47e5a6064] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-s59mm" [9870580d-589b-4ace-b82b-4ed47e5a6064] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.007268908s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-xkbn5" [c9d6251d-3bde-4d10-bd8b-c26f44bdaab9] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0801 23:35:16.351165    9849 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14695-3265-16c8c96838ca145d17ecca8303180c41961a99dd/.minikube/profiles/ingress-addon-legacy-20220801225351-9849/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011535072s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-xkbn5" [c9d6251d-3bde-4d10-bd8b-c26f44bdaab9] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00626922s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-different-port-20220801232429-9849 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220801232429-9849 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220510-4929dd75
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.39s)
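The VerifyKubernetesImages step above runs `sudo crictl images -o json` inside the node and flags anything outside minikube's expected image set. A minimal sketch of that filtering, with an illustrative JSON sample and an assumed prefix allow-list (neither is minikube's actual data):

```python
import json

# Sample shaped like `sudo crictl images -o json` output (structure per the
# crictl JSON format; the image list here is illustrative, not from this run).
CRICTL_JSON = """
{
  "images": [
    {"id": "sha256:aaa", "repoTags": ["registry.k8s.io/kube-apiserver:v1.24.3"]},
    {"id": "sha256:bbb", "repoTags": ["docker.io/kindest/kindnetd:v20220726-ed811e41"]},
    {"id": "sha256:ccc", "repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]}
  ]
}
"""

# Prefixes treated as "minikube" images -- an assumed allow-list for this
# sketch, chosen so the sample reproduces the log above (busybox is flagged).
KNOWN_PREFIXES = ("registry.k8s.io/", "k8s.gcr.io/")

def non_minikube_images(raw: str) -> list[str]:
    """Return every repo tag that does not start with a known prefix."""
    tags = [t for img in json.loads(raw)["images"] for t in img.get("repoTags", [])]
    return [t for t in tags if not t.startswith(KNOWN_PREFIXES)]

for tag in non_minikube_images(CRICTL_JSON):
    print("Found non-minikube image:", tag)
```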

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (3.22s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220801232429-9849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220801232429-9849 -n default-k8s-different-port-20220801232429-9849
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220801232429-9849 -n default-k8s-different-port-20220801232429-9849: exit status 2 (438.507981ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220801232429-9849 -n default-k8s-different-port-20220801232429-9849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220801232429-9849 -n default-k8s-different-port-20220801232429-9849: exit status 2 (397.645011ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220801232429-9849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220801232429-9849 -n default-k8s-different-port-20220801232429-9849
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220801232429-9849 -n default-k8s-different-port-20220801232429-9849
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.22s)

                                                
                                    

Test skip (23/275)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.24.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.24.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.24.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.24.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.24.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.24.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.24.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.24.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.24.3/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:455: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.26s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220801232429-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220801232429-9849
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)

                                                
                                    
TestNetworkPlugins/group/kubenet (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-20220801231634-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20220801231634-9849
--- SKIP: TestNetworkPlugins/group/kubenet (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel (0.25s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220801231634-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220801231634-9849
--- SKIP: TestNetworkPlugins/group/flannel (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220801231635-9849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-20220801231635-9849
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.24s)

                                                
                                    