Test Report: none_Linux 15642

4cf467cecc4d49355139c24bc1420f3978a367dd:2023-01-14:27426

Test failures (1/144)

| Order | Failed Test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 26    | TestAddons/parallel/MetricsServer | 323.2        |

TestAddons/parallel/MetricsServer (323.2s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:364: metrics-server stabilized in 9.69671ms
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-56c6cfbdd9-tg5kv" [99b244b0-02bb-4d7b-8b98-f38c99f1949e] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008391198s
addons_test.go:372: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (59.755116ms)

** stderr **
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:372: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (55.40245ms)

** stderr **
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:372: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (55.545829ms)

** stderr **
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:372: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (55.48278ms)

** stderr **
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:372: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (55.079697ms)

** stderr **
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:372: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (58.536636ms)

** stderr **
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:372: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (55.674506ms)

** stderr **
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:372: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (54.476274ms)

** stderr **
	Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

** /stderr **
addons_test.go:372: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (68.079713ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-565d847f94-hzdcg, age: 4m57.900026939s

** /stderr **
addons_test.go:372: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (70.280932ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-565d847f94-hzdcg, age: 5m55.526137141s

** /stderr **
addons_test.go:372: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (66.206146ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-565d847f94-hzdcg, age: 7m16.909353382s

** /stderr **
addons_test.go:372: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (66.886731ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-565d847f94-hzdcg, age: 7m50.042439607s

** /stderr **
addons_test.go:386: failed checking metric server: exit status 1
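The failing check above polls `kubectl top pods -n kube-system` repeatedly, giving metrics-server time to start serving before the test gives up. That polling pattern can be sketched as a generic POSIX shell helper; the function name and the stand-in "flaky" command below are illustrative, not taken from the minikube test suite:

```shell
#!/bin/sh
# retry_until: run a command up to $1 times, sleeping $2 seconds between
# attempts; returns 0 on the first success, 1 if every attempt fails.
# This mirrors how the test keeps re-running `kubectl top pods` until
# the Metrics API answers (or the retry budget is exhausted).
retry_until() {
    max_attempts=$1; delay=$2; shift 2
    attempt=1
    while [ "$attempt" -le "$max_attempts" ]; do
        if "$@"; then
            return 0
        fi
        attempt=$((attempt + 1))
        sleep "$delay"
    done
    return 1
}

# Demo: a stand-in command that fails twice, then succeeds (in the real
# test this would be `kubectl --context minikube top pods -n kube-system`).
count_file=$(mktemp)
echo 0 > "$count_file"
flaky() {
    n=$(cat "$count_file")
    echo $((n + 1)) > "$count_file"
    [ "$n" -ge 2 ]
}
retry_until 5 0 flaky && echo "succeeded after retries"
rm -f "$count_file"
```

In the real test the delay between attempts grows and the overall budget is bounded, which is why the run above surfaces roughly a dozen `Non-zero exit` lines before `failed checking metric server` is reported.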
addons_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p minikube logs -n 25: (1.540857017s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------|------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  | User | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | root | v1.28.0 | 14 Jan 23 10:05 UTC |                     |
	|         | -p minikube --force            |          |      |         |                     |                     |
	|         | --alsologtostderr              |          |      |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |          |      |         |                     |                     |
	|         | --container-runtime=docker     |          |      |         |                     |                     |
	|         | --driver=none                  |          |      |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |      |         |                     |                     |
	| start   | -o=json --download-only        | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC |                     |
	|         | -p minikube --force            |          |      |         |                     |                     |
	|         | --alsologtostderr              |          |      |         |                     |                     |
	|         | --kubernetes-version=v1.25.3   |          |      |         |                     |                     |
	|         | --container-runtime=docker     |          |      |         |                     |                     |
	|         | --driver=none                  |          |      |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |      |         |                     |                     |
	| delete  | --all                          | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | 14 Jan 23 10:06 UTC |
	| delete  | -p minikube                    | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | 14 Jan 23 10:06 UTC |
	| delete  | -p minikube                    | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | 14 Jan 23 10:06 UTC |
	| start   | --download-only -p             | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC |                     |
	|         | minikube --alsologtostderr     |          |      |         |                     |                     |
	|         | --binary-mirror                |          |      |         |                     |                     |
	|         | http://127.0.0.1:43039         |          |      |         |                     |                     |
	|         | --driver=none                  |          |      |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |      |         |                     |                     |
	| delete  | -p minikube                    | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | 14 Jan 23 10:06 UTC |
	| start   | -p minikube --alsologtostderr  | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | 14 Jan 23 10:06 UTC |
	|         | -v=1 --memory=2048             |          |      |         |                     |                     |
	|         | --wait=true --driver=none      |          |      |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |      |         |                     |                     |
	| delete  | -p minikube                    | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | 14 Jan 23 10:07 UTC |
	| start   | -p minikube --wait=true        | minikube | root | v1.28.0 | 14 Jan 23 10:07 UTC | 14 Jan 23 10:07 UTC |
	|         | --memory=4000                  |          |      |         |                     |                     |
	|         | --alsologtostderr              |          |      |         |                     |                     |
	|         | --addons=registry              |          |      |         |                     |                     |
	|         | --addons=metrics-server        |          |      |         |                     |                     |
	|         | --addons=volumesnapshots       |          |      |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |          |      |         |                     |                     |
	|         | --addons=gcp-auth              |          |      |         |                     |                     |
	|         | --addons=cloud-spanner         |          |      |         |                     |                     |
	|         | --driver=none                  |          |      |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |      |         |                     |                     |
	|         | --addons=helm-tiller           |          |      |         |                     |                     |
	| ip      | minikube ip                    | minikube | root | v1.28.0 | 14 Jan 23 10:08 UTC | 14 Jan 23 10:08 UTC |
	| addons  | minikube addons disable        | minikube | root | v1.28.0 | 14 Jan 23 10:09 UTC | 14 Jan 23 10:09 UTC |
	|         | registry --alsologtostderr     |          |      |         |                     |                     |
	|         | -v=1                           |          |      |         |                     |                     |
	| addons  | minikube addons                | minikube | root | v1.28.0 | 14 Jan 23 10:15 UTC | 14 Jan 23 10:15 UTC |
	|         | disable metrics-server         |          |      |         |                     |                     |
	|         | --alsologtostderr -v=1         |          |      |         |                     |                     |
	|---------|--------------------------------|----------|------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:07:01
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:07:01.557042   16385 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:07:01.557159   16385 out.go:343] TERM=unknown,COLORTERM=, which probably does not support color
	I0114 10:07:01.557170   16385 out.go:309] Setting ErrFile to fd 2...
	I0114 10:07:01.557177   16385 out.go:343] TERM=unknown,COLORTERM=, which probably does not support color
	I0114 10:07:01.557291   16385 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3824/.minikube/bin
	I0114 10:07:01.557744   16385 out.go:303] Setting JSON to false
	I0114 10:07:01.558681   16385 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2969,"bootTime":1673687853,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:07:01.558750   16385 start.go:135] virtualization: kvm guest
	I0114 10:07:01.561975   16385 out.go:177] * minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	W0114 10:07:01.563910   16385 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15642-3824/.minikube/cache/preloaded-tarball: no such file or directory
	I0114 10:07:01.565554   16385 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:07:01.563993   16385 notify.go:220] Checking for updates...
	I0114 10:07:01.569022   16385 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:07:01.570978   16385 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3824/kubeconfig
	I0114 10:07:01.572748   16385 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3824/.minikube
	I0114 10:07:01.574450   16385 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:07:01.576223   16385 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:07:01.577904   16385 out.go:177] * Using the none driver based on user configuration
	I0114 10:07:01.579439   16385 start.go:294] selected driver: none
	I0114 10:07:01.579467   16385 start.go:838] validating driver "none" against <nil>
	I0114 10:07:01.579487   16385 start.go:849] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:07:01.579523   16385 start.go:1598] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0114 10:07:01.579948   16385 out.go:239] ! The 'none' driver does not respect the --memory flag
	I0114 10:07:01.580617   16385 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 10:07:01.580850   16385 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 10:07:01.580889   16385 cni.go:95] Creating CNI manager for ""
	I0114 10:07:01.580904   16385 cni.go:149] Driver none used, CNI unnecessary in this configuration, recommending no CNI
	I0114 10:07:01.580913   16385 start_flags.go:319] config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:07:01.583846   16385 out.go:177] * Starting control plane node minikube in cluster minikube
	I0114 10:07:01.585671   16385 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/config.json ...
	I0114 10:07:01.585709   16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/config.json: {Name:mkcb0f273917183e513823dd07fda69d303637e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:07:01.586018   16385 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:07:01.586044   16385 start.go:364] acquiring machines lock for minikube: {Name:mk211048cabacb95867cd61d1afd712ed43b6718 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0114 10:07:01.586097   16385 start.go:368] acquired machines lock for "minikube" in 37.13µs
	I0114 10:07:01.586112   16385 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name:m01 IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 10:07:01.586179   16385 start.go:125] createHost starting for "m01" (driver="none")
	I0114 10:07:01.588178   16385 out.go:177] * Running on localhost (CPUs=8, Memory=32101MB, Disk=297540MB) ...
	I0114 10:07:01.589937   16385 exec_runner.go:51] Run: systemctl --version
	I0114 10:07:01.592369   16385 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0114 10:07:01.592412   16385 client.go:168] LocalClient.Create starting
	I0114 10:07:01.592478   16385 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15642-3824/.minikube/certs/ca.pem
	I0114 10:07:01.592507   16385 main.go:134] libmachine: Decoding PEM data...
	I0114 10:07:01.592522   16385 main.go:134] libmachine: Parsing certificate...
	I0114 10:07:01.592571   16385 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15642-3824/.minikube/certs/cert.pem
	I0114 10:07:01.592592   16385 main.go:134] libmachine: Decoding PEM data...
	I0114 10:07:01.592603   16385 main.go:134] libmachine: Parsing certificate...
	I0114 10:07:01.592920   16385 client.go:171] LocalClient.Create took 500.368µs
	I0114 10:07:01.592944   16385 start.go:167] duration metric: libmachine.API.Create for "minikube" took 577.029µs
	I0114 10:07:01.592950   16385 start.go:300] post-start starting for "minikube" (driver="none")
	I0114 10:07:01.592979   16385 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:07:01.593009   16385 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:07:01.606636   16385 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 10:07:01.606665   16385 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 10:07:01.606675   16385 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 10:07:01.609437   16385 out.go:177] * OS release is Ubuntu 20.04.5 LTS
	I0114 10:07:01.611020   16385 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3824/.minikube/addons for local assets ...
	I0114 10:07:01.611081   16385 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3824/.minikube/files for local assets ...
	I0114 10:07:01.611103   16385 start.go:303] post-start completed in 18.146264ms
	I0114 10:07:01.611655   16385 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/config.json ...
	I0114 10:07:01.611770   16385 start.go:128] duration metric: createHost completed in 25.583846ms
	I0114 10:07:01.611782   16385 start.go:83] releasing machines lock for "minikube", held for 25.674753ms
	I0114 10:07:01.612066   16385 exec_runner.go:51] Run: cat /version.json
	I0114 10:07:01.612227   16385 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0114 10:07:01.613044   16385 start.go:377] Unable to open version.json: cat /version.json: exit status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0114 10:07:01.613161   16385 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:07:01.634572   16385 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0114 10:07:01.848644   16385 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0114 10:07:02.060449   16385 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0114 10:07:02.268186   16385 exec_runner.go:51] Run: sudo systemctl restart docker
	I0114 10:07:02.492085   16385 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0114 10:07:02.704277   16385 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0114 10:07:02.911788   16385 exec_runner.go:51] Run: sudo systemctl start cri-docker.socket
	I0114 10:07:02.927823   16385 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 10:07:02.927902   16385 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0114 10:07:02.929240   16385 start.go:472] Will wait 60s for crictl version
	I0114 10:07:02.929279   16385 exec_runner.go:51] Run: which crictl
	I0114 10:07:02.930223   16385 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0114 10:07:02.952849   16385 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.22
	RuntimeApiVersion:  1.41.0
	I0114 10:07:02.952908   16385 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0114 10:07:02.979235   16385 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0114 10:07:03.009113   16385 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.22 ...
	I0114 10:07:03.009188   16385 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0114 10:07:03.012256   16385 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0114 10:07:03.013707   16385 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 10:07:03.013753   16385 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0114 10:07:03.117973   16385 cni.go:95] Creating CNI manager for ""
	I0114 10:07:03.117996   16385 cni.go:149] Driver none used, CNI unnecessary in this configuration, recommending no CNI
	I0114 10:07:03.118013   16385 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 10:07:03.118034   16385 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.132.0.4 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.132.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.132.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:07:03.118222   16385 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.132.0.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent"
	  kubeletExtraArgs:
	    node-ip: 10.132.0.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.132.0.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 10:07:03.118333   16385 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=ubuntu-20-agent --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.132.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 10:07:03.118411   16385 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 10:07:03.128842   16385 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.25.3: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.25.3': No such file or directory
	
	Initiating transfer...
	I0114 10:07:03.128888   16385 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.25.3
	I0114 10:07:03.146125   16385 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubeadm.sha256
	I0114 10:07:03.146130   16385 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubectl.sha256
	I0114 10:07:03.146182   16385 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubelet.sha256
	I0114 10:07:03.146195   16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/cache/linux/amd64/v1.25.3/kubeadm --> /var/lib/minikube/binaries/v1.25.3/kubeadm (43802624 bytes)
	I0114 10:07:03.146213   16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/cache/linux/amd64/v1.25.3/kubectl --> /var/lib/minikube/binaries/v1.25.3/kubectl (45015040 bytes)
	I0114 10:07:03.146224   16385 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:07:03.159981   16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/cache/linux/amd64/v1.25.3/kubelet --> /var/lib/minikube/binaries/v1.25.3/kubelet (114237464 bytes)
	I0114 10:07:03.188858   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1593464049 /var/lib/minikube/binaries/v1.25.3/kubeadm
	I0114 10:07:03.195394   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1408995124 /var/lib/minikube/binaries/v1.25.3/kubectl
	I0114 10:07:03.260681   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3520884364 /var/lib/minikube/binaries/v1.25.3/kubelet
	I0114 10:07:03.349875   16385 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 10:07:03.359389   16385 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0114 10:07:03.359409   16385 exec_runner.go:207] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0114 10:07:03.359474   16385 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (522 bytes)
	I0114 10:07:03.359620   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2531836869 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0114 10:07:03.369553   16385 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0114 10:07:03.369580   16385 exec_runner.go:207] rm: /lib/systemd/system/kubelet.service
	I0114 10:07:03.369644   16385 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:07:03.369811   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2705885136 /lib/systemd/system/kubelet.service
	I0114 10:07:03.379894   16385 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2032 bytes)
	I0114 10:07:03.380029   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2971766811 /var/tmp/minikube/kubeadm.yaml.new
	I0114 10:07:03.389480   16385 exec_runner.go:51] Run: grep 10.132.0.4	control-plane.minikube.internal$ /etc/hosts
	I0114 10:07:03.390812   16385 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube for IP: 10.132.0.4
	I0114 10:07:03.390918   16385 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3824/.minikube/ca.key
	I0114 10:07:03.390965   16385 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3824/.minikube/proxy-client-ca.key
	I0114 10:07:03.391020   16385 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/client.key
	I0114 10:07:03.391034   16385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/client.crt with IP's: []
	I0114 10:07:03.536710   16385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/client.crt ...
	I0114 10:07:03.536748   16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/client.crt: {Name:mk6343e22ba0ffe4e9d25050ad02a97f1f8618c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:07:03.536932   16385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/client.key ...
	I0114 10:07:03.536946   16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/client.key: {Name:mk63fb1e63b2117f858e0e7164ffdf4dba02353f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:07:03.537026   16385 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.key.13ebe801
	I0114 10:07:03.537040   16385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.crt.13ebe801 with IP's: [10.132.0.4 10.96.0.1 127.0.0.1 10.0.0.1]
	I0114 10:07:03.839950   16385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.crt.13ebe801 ...
	I0114 10:07:03.839985   16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.crt.13ebe801: {Name:mke4088f7ffeb92284f3881ec7b5a89c34fa52c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:07:03.840158   16385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.key.13ebe801 ...
	I0114 10:07:03.840170   16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.key.13ebe801: {Name:mkcec8d9064a9a0a0afd294395a4653a24f4fb5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:07:03.840239   16385 certs.go:320] copying /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.crt.13ebe801 -> /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.crt
	I0114 10:07:03.840319   16385 certs.go:324] copying /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.key.13ebe801 -> /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.key
	I0114 10:07:03.840367   16385 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.key
	I0114 10:07:03.840381   16385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0114 10:07:04.000794   16385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.crt ...
	I0114 10:07:04.000827   16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.crt: {Name:mkfdc9a5c41c36e17134ad349c5138c80e1983c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:07:04.001010   16385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.key ...
	I0114 10:07:04.001022   16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.key: {Name:mka8059030bef3a27e7eecff0328f9d74e3cab05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:07:04.001183   16385 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3824/.minikube/certs/home/jenkins/minikube-integration/15642-3824/.minikube/certs/ca-key.pem (1679 bytes)
	I0114 10:07:04.001219   16385 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3824/.minikube/certs/home/jenkins/minikube-integration/15642-3824/.minikube/certs/ca.pem (1070 bytes)
	I0114 10:07:04.001238   16385 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3824/.minikube/certs/home/jenkins/minikube-integration/15642-3824/.minikube/certs/cert.pem (1115 bytes)
	I0114 10:07:04.001256   16385 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3824/.minikube/certs/home/jenkins/minikube-integration/15642-3824/.minikube/certs/key.pem (1679 bytes)
	I0114 10:07:04.001926   16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 10:07:04.002050   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1033223229 /var/lib/minikube/certs/apiserver.crt
	I0114 10:07:04.012848   16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 10:07:04.012975   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube66064136 /var/lib/minikube/certs/apiserver.key
	I0114 10:07:04.022814   16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 10:07:04.022933   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3799989977 /var/lib/minikube/certs/proxy-client.crt
	I0114 10:07:04.033853   16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0114 10:07:04.033979   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3117975674 /var/lib/minikube/certs/proxy-client.key
	I0114 10:07:04.045784   16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:07:04.046005   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube668977885 /var/lib/minikube/certs/ca.crt
	I0114 10:07:04.055947   16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0114 10:07:04.056103   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2493247125 /var/lib/minikube/certs/ca.key
	I0114 10:07:04.064867   16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:07:04.065031   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube795335000 /var/lib/minikube/certs/proxy-client-ca.crt
	I0114 10:07:04.075007   16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0114 10:07:04.075134   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube295349317 /var/lib/minikube/certs/proxy-client-ca.key
	I0114 10:07:04.085778   16385 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0114 10:07:04.085804   16385 exec_runner.go:207] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:07:04.085853   16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:07:04.085978   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube494919457 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:07:04.095086   16385 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0114 10:07:04.095191   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1527100247 /var/lib/minikube/kubeconfig
	I0114 10:07:04.105200   16385 exec_runner.go:51] Run: openssl version
	I0114 10:07:04.108182   16385 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:07:04.118122   16385 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:07:04.119389   16385 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:07:04.119424   16385 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:07:04.122297   16385 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 10:07:04.131897   16385 kubeadm.go:396] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:10.132.0.4 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:07:04.132039   16385 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 10:07:04.153378   16385 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 10:07:04.163613   16385 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:07:04.174891   16385 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0114 10:07:04.202281   16385 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:07:04.212099   16385 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 10:07:04.212138   16385 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0114 10:07:04.250668   16385 kubeadm.go:317] W0114 10:07:04.250530   16883 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 10:07:04.254691   16385 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0114 10:07:04.254720   16385 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 10:07:04.367648   16385 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 10:07:04.367689   16385 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 10:07:04.367695   16385 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 10:07:04.367699   16385 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 10:07:04.419145   16385 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 10:07:04.422609   16385 out.go:204]   - Generating certificates and keys ...
	I0114 10:07:04.422661   16385 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 10:07:04.422679   16385 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 10:07:04.467512   16385 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0114 10:07:04.815164   16385 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0114 10:07:05.021711   16385 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0114 10:07:05.297265   16385 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0114 10:07:05.338077   16385 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0114 10:07:05.338176   16385 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent] and IPs [10.132.0.4 127.0.0.1 ::1]
	I0114 10:07:05.584635   16385 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0114 10:07:05.584664   16385 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent] and IPs [10.132.0.4 127.0.0.1 ::1]
	I0114 10:07:05.642276   16385 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0114 10:07:06.099672   16385 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0114 10:07:06.173990   16385 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0114 10:07:06.174098   16385 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 10:07:06.218376   16385 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 10:07:06.371237   16385 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 10:07:06.531816   16385 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 10:07:06.655459   16385 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 10:07:06.676685   16385 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 10:07:06.678623   16385 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 10:07:06.678649   16385 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0114 10:07:06.896971   16385 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 10:07:06.899506   16385 out.go:204]   - Booting up control plane ...
	I0114 10:07:06.899539   16385 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 10:07:06.899894   16385 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 10:07:06.901234   16385 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 10:07:06.902223   16385 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 10:07:06.904356   16385 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 10:07:12.907069   16385 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002628 seconds
	I0114 10:07:12.907099   16385 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0114 10:07:12.915833   16385 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0114 10:07:13.431157   16385 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0114 10:07:13.431182   16385 kubeadm.go:317] [mark-control-plane] Marking the node ubuntu-20-agent as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0114 10:07:13.938395   16385 kubeadm.go:317] [bootstrap-token] Using token: vc8k2a.fqd42dsl4zvke0q4
	I0114 10:07:13.940967   16385 out.go:204]   - Configuring RBAC rules ...
	I0114 10:07:13.941013   16385 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0114 10:07:13.943870   16385 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0114 10:07:13.950643   16385 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0114 10:07:13.952800   16385 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0114 10:07:13.954860   16385 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0114 10:07:13.956818   16385 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0114 10:07:13.964052   16385 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0114 10:07:14.279890   16385 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0114 10:07:14.347604   16385 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0114 10:07:14.348886   16385 kubeadm.go:317] 
	I0114 10:07:14.348907   16385 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0114 10:07:14.348912   16385 kubeadm.go:317] 
	I0114 10:07:14.348916   16385 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0114 10:07:14.348921   16385 kubeadm.go:317] 
	I0114 10:07:14.348925   16385 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0114 10:07:14.348929   16385 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0114 10:07:14.348934   16385 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0114 10:07:14.348937   16385 kubeadm.go:317] 
	I0114 10:07:14.348942   16385 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0114 10:07:14.348945   16385 kubeadm.go:317] 
	I0114 10:07:14.348950   16385 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0114 10:07:14.348953   16385 kubeadm.go:317] 
	I0114 10:07:14.348957   16385 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0114 10:07:14.348961   16385 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0114 10:07:14.348973   16385 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0114 10:07:14.348977   16385 kubeadm.go:317] 
	I0114 10:07:14.348982   16385 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0114 10:07:14.348986   16385 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0114 10:07:14.348990   16385 kubeadm.go:317] 
	I0114 10:07:14.348994   16385 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token vc8k2a.fqd42dsl4zvke0q4 \
	I0114 10:07:14.348998   16385 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:ca5ae222565f0d80a07693c7c3b76e0f810307ec7292c767edf50f1957ddca19 \
	I0114 10:07:14.349002   16385 kubeadm.go:317] 	--control-plane 
	I0114 10:07:14.349006   16385 kubeadm.go:317] 
	I0114 10:07:14.349009   16385 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0114 10:07:14.349013   16385 kubeadm.go:317] 
	I0114 10:07:14.349017   16385 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token vc8k2a.fqd42dsl4zvke0q4 \
	I0114 10:07:14.349021   16385 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:ca5ae222565f0d80a07693c7c3b76e0f810307ec7292c767edf50f1957ddca19 
	I0114 10:07:14.352315   16385 cni.go:95] Creating CNI manager for ""
	I0114 10:07:14.352345   16385 cni.go:149] Driver none used, CNI unnecessary in this configuration, recommending no CNI
	I0114 10:07:14.352386   16385 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 10:07:14.352465   16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:07:14.352483   16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2023_01_14T10_07_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:07:14.365726   16385 ops.go:34] apiserver oom_adj: -16
	I0114 10:07:14.453656   16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:07:15.044388   16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:07:15.543943   16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:07:16.043977   16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:07:16.544007   16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:07:17.043878   16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:07:17.543808   16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 10:07:27.112549   16385 kubeadm.go:1067] duration metric: took 12.760146034s to wait for elevateKubeSystemPrivileges.
	I0114 10:07:27.112581   16385 kubeadm.go:398] StartCluster complete in 22.980693946s
	I0114 10:07:27.112601   16385 settings.go:142] acquiring lock: {Name:mk762d90acf41588a398ec2dea6bc8cf96f87602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:07:27.112692   16385 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15642-3824/kubeconfig
	I0114 10:07:27.113371   16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/kubeconfig: {Name:mk2c87b79f2a73c5564b0710ce5c3222bf694f79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:07:27.628066   16385 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
	I0114 10:07:27.630870   16385 out.go:177] * Configuring local host environment ...
	I0114 10:07:27.628131   16385 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 10:07:27.628145   16385 addons.go:486] enableAddons start: toEnable=map[], additional=[registry metrics-server volumesnapshots csi-hostpath-driver gcp-auth cloud-spanner helm-tiller]
	I0114 10:07:27.628364   16385 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	W0114 10:07:27.632611   16385 out.go:239] * 
	W0114 10:07:27.632636   16385 out.go:239] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0114 10:07:27.632646   16385 out.go:239] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0114 10:07:27.632656   16385 out.go:239] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0114 10:07:27.632664   16385 out.go:239] * 
	W0114 10:07:27.632814   16385 out.go:239] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0114 10:07:27.632832   16385 out.go:239] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0114 10:07:27.632842   16385 out.go:239] * 
	I0114 10:07:27.632868   16385 addons.go:65] Setting volumesnapshots=true in profile "minikube"
	I0114 10:07:27.632895   16385 addons.go:227] Setting addon volumesnapshots=true in "minikube"
	W0114 10:07:27.632908   16385 out.go:239]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0114 10:07:27.632922   16385 out.go:239]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0114 10:07:27.632932   16385 out.go:239] * 
	W0114 10:07:27.632939   16385 out.go:239] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0114 10:07:27.632950   16385 host.go:66] Checking if "minikube" exists ...
	I0114 10:07:27.632967   16385 start.go:212] Will wait 6m0s for node &{Name:m01 IP:10.132.0.4 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 10:07:27.634987   16385 out.go:177] * Verifying Kubernetes components...
	I0114 10:07:27.633337   16385 addons.go:65] Setting gcp-auth=true in profile "minikube"
	I0114 10:07:27.633343   16385 addons.go:65] Setting cloud-spanner=true in profile "minikube"
	I0114 10:07:27.633340   16385 addons.go:65] Setting metrics-server=true in profile "minikube"
	I0114 10:07:27.633347   16385 addons.go:65] Setting csi-hostpath-driver=true in profile "minikube"
	I0114 10:07:27.633352   16385 addons.go:65] Setting default-storageclass=true in profile "minikube"
	I0114 10:07:27.633354   16385 addons.go:65] Setting helm-tiller=true in profile "minikube"
	I0114 10:07:27.633361   16385 addons.go:65] Setting registry=true in profile "minikube"
	I0114 10:07:27.633369   16385 addons.go:65] Setting storage-provisioner=true in profile "minikube"
	I0114 10:07:27.633709   16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
	I0114 10:07:27.637157   16385 api_server.go:165] Checking apiserver status ...
	I0114 10:07:27.637201   16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:07:27.637933   16385 addons.go:227] Setting addon csi-hostpath-driver=true in "minikube"
	I0114 10:07:27.637975   16385 mustload.go:65] Loading cluster: minikube
	I0114 10:07:27.638003   16385 host.go:66] Checking if "minikube" exists ...
	I0114 10:07:27.638083   16385 addons.go:227] Setting addon cloud-spanner=true in "minikube"
	I0114 10:07:27.638130   16385 host.go:66] Checking if "minikube" exists ...
	I0114 10:07:27.638208   16385 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 10:07:27.638251   16385 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0114 10:07:27.638406   16385 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:07:27.638688   16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
	I0114 10:07:27.638709   16385 addons.go:227] Setting addon registry=true in "minikube"
	I0114 10:07:27.638710   16385 addons.go:227] Setting addon metrics-server=true in "minikube"
	I0114 10:07:27.638724   16385 addons.go:227] Setting addon storage-provisioner=true in "minikube"
	W0114 10:07:27.638732   16385 addons.go:236] addon storage-provisioner should already be in state true
	I0114 10:07:27.638742   16385 host.go:66] Checking if "minikube" exists ...
	I0114 10:07:27.638749   16385 host.go:66] Checking if "minikube" exists ...
	I0114 10:07:27.638755   16385 host.go:66] Checking if "minikube" exists ...
	I0114 10:07:27.638769   16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
	I0114 10:07:27.638789   16385 api_server.go:165] Checking apiserver status ...
	I0114 10:07:27.638826   16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:07:27.638926   16385 addons.go:227] Setting addon helm-tiller=true in "minikube"
	I0114 10:07:27.638977   16385 host.go:66] Checking if "minikube" exists ...
	I0114 10:07:27.639316   16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
	I0114 10:07:27.639329   16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
	I0114 10:07:27.639335   16385 api_server.go:165] Checking apiserver status ...
	I0114 10:07:27.639338   16385 api_server.go:165] Checking apiserver status ...
	I0114 10:07:27.639349   16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
	I0114 10:07:27.638712   16385 api_server.go:165] Checking apiserver status ...
	I0114 10:07:27.639374   16385 api_server.go:165] Checking apiserver status ...
	I0114 10:07:27.639391   16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:07:27.639402   16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:07:27.639434   16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
	I0114 10:07:27.639453   16385 api_server.go:165] Checking apiserver status ...
	I0114 10:07:27.639362   16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:07:27.638689   16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
	I0114 10:07:27.639495   16385 api_server.go:165] Checking apiserver status ...
	I0114 10:07:27.639363   16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:07:27.639512   16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:07:27.638688   16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
	I0114 10:07:27.639582   16385 api_server.go:165] Checking apiserver status ...
	I0114 10:07:27.639604   16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:07:27.639479   16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:07:27.656762   16385 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent" to be "Ready" ...
	I0114 10:07:27.660679   16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
	I0114 10:07:27.660727   16385 node_ready.go:49] node "ubuntu-20-agent" has status "Ready":"True"
	I0114 10:07:27.660741   16385 node_ready.go:38] duration metric: took 3.939345ms waiting for node "ubuntu-20-agent" to be "Ready" ...
	I0114 10:07:27.660751   16385 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:07:27.661027   16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
	I0114 10:07:27.662594   16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
	I0114 10:07:27.663405   16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
	I0114 10:07:27.675779   16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
	I0114 10:07:27.676017   16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
	I0114 10:07:27.676277   16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
	I0114 10:07:27.683250   16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
	I0114 10:07:27.683320   16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
	I0114 10:07:27.683481   16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
	I0114 10:07:27.683515   16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
	I0114 10:07:27.685064   16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
	I0114 10:07:27.685108   16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
	I0114 10:07:27.686804   16385 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-hzdcg" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:27.697784   16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
	I0114 10:07:27.697857   16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
	I0114 10:07:27.699442   16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
	I0114 10:07:27.699503   16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
	I0114 10:07:27.714758   16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
	I0114 10:07:27.715265   16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
	I0114 10:07:27.721767   16385 api_server.go:203] freezer state: "THAWED"
	I0114 10:07:27.721801   16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0114 10:07:27.723843   16385 api_server.go:203] freezer state: "THAWED"
	I0114 10:07:27.723941   16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0114 10:07:27.726834   16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
	I0114 10:07:27.726891   16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
	I0114 10:07:27.727240   16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
	I0114 10:07:27.727390   16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
	I0114 10:07:27.727550   16385 api_server.go:203] freezer state: "THAWED"
	I0114 10:07:27.727567   16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0114 10:07:27.729902   16385 api_server.go:203] freezer state: "THAWED"
	I0114 10:07:27.729924   16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0114 10:07:27.734855   16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0114 10:07:27.736233   16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0114 10:07:27.740734   16385 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0114 10:07:27.738541   16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0114 10:07:27.738872   16385 addons.go:227] Setting addon default-storageclass=true in "minikube"
	I0114 10:07:27.738914   16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0114 10:07:27.747833   16385 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.2
	I0114 10:07:27.744162   16385 host.go:66] Checking if "minikube" exists ...
	I0114 10:07:27.744228   16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	W0114 10:07:27.744241   16385 addons.go:236] addon default-storageclass should already be in state true
	I0114 10:07:27.751321   16385 api_server.go:203] freezer state: "THAWED"
	I0114 10:07:27.751391   16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0114 10:07:27.751816   16385 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0114 10:07:27.751845   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0114 10:07:27.751981   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2358514389 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0114 10:07:27.752145   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0114 10:07:27.752246   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2294294542 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0114 10:07:27.754170   16385 host.go:66] Checking if "minikube" exists ...
	I0114 10:07:27.754851   16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
	I0114 10:07:27.754869   16385 api_server.go:165] Checking apiserver status ...
	I0114 10:07:27.754902   16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:07:27.756882   16385 api_server.go:203] freezer state: "THAWED"
	I0114 10:07:27.756913   16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0114 10:07:27.757082   16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
	I0114 10:07:27.757125   16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
	I0114 10:07:27.759892   16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0114 10:07:27.762890   16385 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:07:27.765440   16385 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:07:27.765470   16385 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0114 10:07:27.765485   16385 exec_runner.go:207] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:07:27.765654   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 10:07:27.765789   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2149253070 /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:07:27.762644   16385 api_server.go:203] freezer state: "THAWED"
	I0114 10:07:27.766377   16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0114 10:07:27.763517   16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
	I0114 10:07:27.763658   16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0114 10:07:27.766630   16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
	I0114 10:07:27.772427   16385 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0114 10:07:27.773146   16385 api_server.go:203] freezer state: "THAWED"
	I0114 10:07:27.775029   16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0114 10:07:27.773331   16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0114 10:07:27.779936   16385 out.go:177]   - Using image docker.io/registry:2.8.1
	I0114 10:07:27.777995   16385 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0114 10:07:27.779845   16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0114 10:07:27.783485   16385 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0114 10:07:27.785409   16385 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0114 10:07:27.787570   16385 addons.go:419] installing /etc/kubernetes/addons/registry-rc.yaml
	I0114 10:07:27.787607   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0114 10:07:27.787704   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3017411789 /etc/kubernetes/addons/registry-rc.yaml
	I0114 10:07:27.785614   16385 addons.go:419] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0114 10:07:27.787865   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0114 10:07:27.790509   16385 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0114 10:07:27.787981   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube520640777 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0114 10:07:27.788171   16385 addons.go:419] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0114 10:07:27.792298   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0114 10:07:27.792455   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3813655867 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0114 10:07:27.795954   16385 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0114 10:07:27.793408   16385 api_server.go:203] freezer state: "THAWED"
	I0114 10:07:27.794432   16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:07:27.796387   16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
	I0114 10:07:27.798139   16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0114 10:07:27.800349   16385 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0114 10:07:27.805280   16385 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0114 10:07:27.809465   16385 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0114 10:07:27.807997   16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0114 10:07:27.814637   16385 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.4.8
	I0114 10:07:27.817513   16385 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0114 10:07:27.819892   16385 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0114 10:07:27.817479   16385 addons.go:419] installing /etc/kubernetes/addons/deployment.yaml
	I0114 10:07:27.818550   16385 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0114 10:07:27.819054   16385 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0114 10:07:27.821365   16385 addons.go:419] installing /etc/kubernetes/addons/registry-svc.yaml
	I0114 10:07:27.822304   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0114 10:07:27.822417   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0114 10:07:27.822431   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3257268722 /etc/kubernetes/addons/registry-svc.yaml
	I0114 10:07:27.822445   16385 addons.go:419] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0114 10:07:27.822464   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0114 10:07:27.822545   16385 addons.go:419] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0114 10:07:27.822549   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3716108104 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0114 10:07:27.822553   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4257305772 /etc/kubernetes/addons/deployment.yaml
	I0114 10:07:27.822564   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0114 10:07:27.822577   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0114 10:07:27.822648   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2050701391 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0114 10:07:27.822684   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2295067184 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0114 10:07:27.824911   16385 addons.go:419] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0114 10:07:27.824938   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0114 10:07:27.825035   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube46951008 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0114 10:07:27.833617   16385 addons.go:419] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0114 10:07:27.833733   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0114 10:07:27.833906   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1716086082 /etc/kubernetes/addons/registry-proxy.yaml
	I0114 10:07:27.834087   16385 addons.go:419] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0114 10:07:27.834120   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0114 10:07:27.834281   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3106661543 /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0114 10:07:27.836684   16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
	I0114 10:07:27.836767   16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
	I0114 10:07:27.837002   16385 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0114 10:07:27.837029   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0114 10:07:27.837128   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4055652415 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0114 10:07:27.837321   16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0114 10:07:27.842093   16385 addons.go:419] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0114 10:07:27.842118   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0114 10:07:27.842220   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4233165337 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0114 10:07:27.848175   16385 addons.go:419] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0114 10:07:27.848210   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0114 10:07:27.848328   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2093681031 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0114 10:07:27.853664   16385 addons.go:419] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0114 10:07:27.853701   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0114 10:07:27.853819   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2763555223 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0114 10:07:27.855496   16385 api_server.go:203] freezer state: "THAWED"
	I0114 10:07:27.855526   16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0114 10:07:27.857493   16385 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0114 10:07:27.857526   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0114 10:07:27.857665   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube364474976 /etc/kubernetes/addons/metrics-server-service.yaml
	I0114 10:07:27.859641   16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0114 10:07:27.863907   16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0114 10:07:27.864005   16385 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 10:07:27.864021   16385 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0114 10:07:27.864028   16385 exec_runner.go:207] rm: /etc/kubernetes/addons/storageclass.yaml
	I0114 10:07:27.864091   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 10:07:27.864202   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4091563180 /etc/kubernetes/addons/storageclass.yaml
	I0114 10:07:27.864434   16385 addons.go:419] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0114 10:07:27.864454   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0114 10:07:27.864553   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3624240237 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0114 10:07:27.866694   16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0114 10:07:27.883173   16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 10:07:27.885442   16385 addons.go:419] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0114 10:07:27.885479   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0114 10:07:27.886105   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1517598696 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0114 10:07:27.894678   16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0114 10:07:27.903981   16385 addons.go:419] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0114 10:07:27.904020   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0114 10:07:27.904147   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1357345172 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0114 10:07:27.922436   16385 addons.go:419] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0114 10:07:27.922471   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0114 10:07:27.922599   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1658521713 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0114 10:07:27.928404   16385 addons.go:419] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0114 10:07:27.928437   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0114 10:07:27.928531   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1677821458 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0114 10:07:27.970777   16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0114 10:07:27.973303   16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0114 10:07:27.973339   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0114 10:07:27.973459   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2193996750 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0114 10:07:28.029076   16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0114 10:07:28.029113   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0114 10:07:28.029240   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3161982828 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0114 10:07:28.066112   16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0114 10:07:28.066142   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0114 10:07:28.066259   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3166253901 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0114 10:07:28.105698   16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0114 10:07:28.105744   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0114 10:07:28.105885   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2629367358 /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0114 10:07:28.131811   16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0114 10:07:28.131857   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0114 10:07:28.132004   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3447136068 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0114 10:07:28.153733   16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0114 10:07:28.153812   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0114 10:07:28.153943   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2767193131 /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0114 10:07:28.175265   16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0114 10:07:28.175302   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0114 10:07:28.175420   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2955231695 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0114 10:07:28.188823   16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0114 10:07:28.741108   16385 start.go:833] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS
	I0114 10:07:28.870354   16385 addons.go:457] Verifying addon metrics-server=true in "minikube"
	I0114 10:07:28.914573   16385 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.054884841s)
	I0114 10:07:28.914607   16385 addons.go:457] Verifying addon registry=true in "minikube"
	I0114 10:07:28.917026   16385 out.go:177] * Verifying registry addon...
	I0114 10:07:28.920003   16385 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0114 10:07:28.924387   16385 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0114 10:07:28.924415   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:28.935065   16385 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.068328174s)
	I0114 10:07:28.990728   16385 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.01989352s)
	W0114 10:07:28.990771   16385 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0114 10:07:28.990791   16385 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0114 10:07:29.241006   16385 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.05211434s)
	I0114 10:07:29.241043   16385 addons.go:457] Verifying addon csi-hostpath-driver=true in "minikube"
	I0114 10:07:29.243728   16385 out.go:177] * Verifying csi-hostpath-driver addon...
	I0114 10:07:29.246723   16385 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0114 10:07:29.250477   16385 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0114 10:07:29.250496   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:29.267706   16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0114 10:07:29.428637   16385 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0114 10:07:29.428657   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:29.698824   16385 pod_ready.go:102] pod "coredns-565d847f94-hzdcg" in "kube-system" namespace has status "Ready":"False"
	I0114 10:07:29.756664   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:29.929597   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:30.255526   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:30.432860   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:30.757693   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:30.929971   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:31.261020   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:31.429887   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:31.700385   16385 pod_ready.go:102] pod "coredns-565d847f94-hzdcg" in "kube-system" namespace has status "Ready":"False"
	I0114 10:07:31.756665   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:31.930089   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:32.070822   16385 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.802992171s)
	I0114 10:07:32.256483   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:32.430438   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:32.698915   16385 pod_ready.go:92] pod "coredns-565d847f94-hzdcg" in "kube-system" namespace has status "Ready":"True"
	I0114 10:07:32.698939   16385 pod_ready.go:81] duration metric: took 5.012115453s waiting for pod "coredns-565d847f94-hzdcg" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:32.698956   16385 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-j4qdt" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:32.756361   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:32.929234   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:33.255831   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:33.429210   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:33.757099   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:33.929254   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:34.209297   16385 pod_ready.go:92] pod "coredns-565d847f94-j4qdt" in "kube-system" namespace has status "Ready":"True"
	I0114 10:07:34.209329   16385 pod_ready.go:81] duration metric: took 1.51036685s waiting for pod "coredns-565d847f94-j4qdt" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:34.209343   16385 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:34.214090   16385 pod_ready.go:92] pod "etcd-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
	I0114 10:07:34.214114   16385 pod_ready.go:81] duration metric: took 4.763853ms waiting for pod "etcd-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:34.214127   16385 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:34.219017   16385 pod_ready.go:92] pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
	I0114 10:07:34.219040   16385 pod_ready.go:81] duration metric: took 4.905219ms waiting for pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:34.219052   16385 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:34.223730   16385 pod_ready.go:92] pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
	I0114 10:07:34.223752   16385 pod_ready.go:81] duration metric: took 4.692428ms waiting for pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:34.223764   16385 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kg2xf" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:34.255805   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:34.296428   16385 pod_ready.go:92] pod "kube-proxy-kg2xf" in "kube-system" namespace has status "Ready":"True"
	I0114 10:07:34.296457   16385 pod_ready.go:81] duration metric: took 72.684129ms waiting for pod "kube-proxy-kg2xf" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:34.296471   16385 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:34.360171   16385 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0114 10:07:34.360302   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2411787160 /var/lib/minikube/google_application_credentials.json
	I0114 10:07:34.372980   16385 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0114 10:07:34.373117   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2944672869 /var/lib/minikube/google_cloud_project
	I0114 10:07:34.386372   16385 addons.go:227] Setting addon gcp-auth=true in "minikube"
	I0114 10:07:34.386487   16385 host.go:66] Checking if "minikube" exists ...
	I0114 10:07:34.387039   16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
	I0114 10:07:34.387057   16385 api_server.go:165] Checking apiserver status ...
	I0114 10:07:34.387081   16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:07:34.409026   16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
	I0114 10:07:34.421060   16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
	I0114 10:07:34.421114   16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
	I0114 10:07:34.429293   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:34.430803   16385 api_server.go:203] freezer state: "THAWED"
	I0114 10:07:34.430830   16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0114 10:07:34.435220   16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0114 10:07:34.435273   16385 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0114 10:07:34.438478   16385 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
	I0114 10:07:34.440038   16385 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.13
	I0114 10:07:34.441577   16385 addons.go:419] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0114 10:07:34.441611   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0114 10:07:34.441886   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube571050405 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0114 10:07:34.454552   16385 addons.go:419] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0114 10:07:34.454584   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0114 10:07:34.454672   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3303134801 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0114 10:07:34.465184   16385 addons.go:419] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0114 10:07:34.465217   16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5393 bytes)
	I0114 10:07:34.465335   16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube990606094 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0114 10:07:34.477202   16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0114 10:07:34.696473   16385 pod_ready.go:92] pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
	I0114 10:07:34.696495   16385 pod_ready.go:81] duration metric: took 400.017786ms waiting for pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
	I0114 10:07:34.696504   16385 pod_ready.go:38] duration metric: took 7.035739929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:07:34.696527   16385 api_server.go:51] waiting for apiserver process to appear ...
	I0114 10:07:34.696572   16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:07:34.734100   16385 api_server.go:71] duration metric: took 7.101094027s to wait for apiserver process to appear ...
	I0114 10:07:34.734128   16385 api_server.go:87] waiting for apiserver healthz status ...
	I0114 10:07:34.734142   16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
	I0114 10:07:34.738480   16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
	ok
	I0114 10:07:34.739240   16385 api_server.go:140] control plane version: v1.25.3
	I0114 10:07:34.739265   16385 api_server.go:130] duration metric: took 5.131151ms to wait for apiserver health ...
	I0114 10:07:34.739275   16385 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 10:07:34.755735   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:34.902599   16385 system_pods.go:59] 19 kube-system pods found
	I0114 10:07:34.902639   16385 system_pods.go:61] "coredns-565d847f94-hzdcg" [e29d84f5-82a6-47c0-b832-601c7c0781a9] Running
	I0114 10:07:34.902647   16385 system_pods.go:61] "coredns-565d847f94-j4qdt" [235e6f77-d3f1-4391-ad58-df166f26d492] Running
	I0114 10:07:34.902654   16385 system_pods.go:61] "csi-hostpath-attacher-0" [5bd68643-6771-472d-99e4-4015cd983d36] Pending
	I0114 10:07:34.902662   16385 system_pods.go:61] "csi-hostpath-provisioner-0" [73cb9a92-49ec-4a1a-884d-a2cbb3f3542d] Pending
	I0114 10:07:34.902669   16385 system_pods.go:61] "csi-hostpath-resizer-0" [2c1884b8-314c-4b4c-a2dc-1d9186cf0792] Pending
	I0114 10:07:34.902676   16385 system_pods.go:61] "csi-hostpath-snapshotter-0" [48f2bfa7-7661-45de-ac6f-19f41e393d0d] Pending
	I0114 10:07:34.902683   16385 system_pods.go:61] "csi-hostpathplugin-0" [7a9ea40d-11af-40dc-800a-213c03c35ebc] Pending
	I0114 10:07:34.902695   16385 system_pods.go:61] "etcd-ubuntu-20-agent" [a1b7d9bb-31d0-441d-8f46-0aa17e6541f1] Running
	I0114 10:07:34.902707   16385 system_pods.go:61] "kube-apiserver-ubuntu-20-agent" [36ae1b47-772b-425a-b340-0a9b32861e7d] Running
	I0114 10:07:34.902721   16385 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent" [df519fda-9fe2-47c1-83cf-17df66f0fb3e] Running
	I0114 10:07:34.902728   16385 system_pods.go:61] "kube-proxy-kg2xf" [26fe60cf-f9db-4fdd-af89-776e4ede4748] Running
	I0114 10:07:34.902738   16385 system_pods.go:61] "kube-scheduler-ubuntu-20-agent" [93aa1003-d1c5-4b8b-826f-83be5d5d2f29] Running
	I0114 10:07:34.902754   16385 system_pods.go:61] "metrics-server-56c6cfbdd9-tg5kv" [99b244b0-02bb-4d7b-8b98-f38c99f1949e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0114 10:07:34.902766   16385 system_pods.go:61] "registry-kq4cd" [32b72e54-cd00-412d-9956-c5373a71c06c] Pending
	I0114 10:07:34.902776   16385 system_pods.go:61] "registry-proxy-s9fw7" [1ad6757d-2230-4f49-bb63-c55e4bf5d78b] Pending
	I0114 10:07:34.902787   16385 system_pods.go:61] "snapshot-controller-67c8f9659-hb5bx" [6d8b5c82-bb84-4599-9b84-b8dc330fdb73] Pending
	I0114 10:07:34.902795   16385 system_pods.go:61] "snapshot-controller-67c8f9659-lcxlj" [44d0f1e8-c929-4513-9907-e019af13d5bd] Pending
	I0114 10:07:34.902809   16385 system_pods.go:61] "storage-provisioner" [f313a32c-e6d4-45f9-a444-4fc747ab9a81] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0114 10:07:34.902820   16385 system_pods.go:61] "tiller-deploy-696b5bfbb7-pg8sd" [930aa4f6-25af-4b84-9939-c484716e2fdf] Pending
	I0114 10:07:34.902832   16385 system_pods.go:74] duration metric: took 163.54964ms to wait for pod list to return data ...
	I0114 10:07:34.902845   16385 default_sa.go:34] waiting for default service account to be created ...
	I0114 10:07:34.928353   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:35.096651   16385 default_sa.go:45] found service account: "default"
	I0114 10:07:35.096674   16385 default_sa.go:55] duration metric: took 193.820503ms for default service account to be created ...
	I0114 10:07:35.096682   16385 system_pods.go:116] waiting for k8s-apps to be running ...
	I0114 10:07:35.213496   16385 addons.go:457] Verifying addon gcp-auth=true in "minikube"
	I0114 10:07:35.216418   16385 out.go:177] * Verifying gcp-auth addon...
	I0114 10:07:35.218699   16385 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0114 10:07:35.220953   16385 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0114 10:07:35.220971   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:35.255307   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:35.302337   16385 system_pods.go:86] 19 kube-system pods found
	I0114 10:07:35.302372   16385 system_pods.go:89] "coredns-565d847f94-hzdcg" [e29d84f5-82a6-47c0-b832-601c7c0781a9] Running
	I0114 10:07:35.302381   16385 system_pods.go:89] "coredns-565d847f94-j4qdt" [235e6f77-d3f1-4391-ad58-df166f26d492] Running
	I0114 10:07:35.302388   16385 system_pods.go:89] "csi-hostpath-attacher-0" [5bd68643-6771-472d-99e4-4015cd983d36] Pending
	I0114 10:07:35.302394   16385 system_pods.go:89] "csi-hostpath-provisioner-0" [73cb9a92-49ec-4a1a-884d-a2cbb3f3542d] Pending
	I0114 10:07:35.302400   16385 system_pods.go:89] "csi-hostpath-resizer-0" [2c1884b8-314c-4b4c-a2dc-1d9186cf0792] Pending
	I0114 10:07:35.302407   16385 system_pods.go:89] "csi-hostpath-snapshotter-0" [48f2bfa7-7661-45de-ac6f-19f41e393d0d] Pending
	I0114 10:07:35.302416   16385 system_pods.go:89] "csi-hostpathplugin-0" [7a9ea40d-11af-40dc-800a-213c03c35ebc] Pending
	I0114 10:07:35.302427   16385 system_pods.go:89] "etcd-ubuntu-20-agent" [a1b7d9bb-31d0-441d-8f46-0aa17e6541f1] Running
	I0114 10:07:35.302438   16385 system_pods.go:89] "kube-apiserver-ubuntu-20-agent" [36ae1b47-772b-425a-b340-0a9b32861e7d] Running
	I0114 10:07:35.302449   16385 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent" [df519fda-9fe2-47c1-83cf-17df66f0fb3e] Running
	I0114 10:07:35.302463   16385 system_pods.go:89] "kube-proxy-kg2xf" [26fe60cf-f9db-4fdd-af89-776e4ede4748] Running
	I0114 10:07:35.302476   16385 system_pods.go:89] "kube-scheduler-ubuntu-20-agent" [93aa1003-d1c5-4b8b-826f-83be5d5d2f29] Running
	I0114 10:07:35.302486   16385 system_pods.go:89] "metrics-server-56c6cfbdd9-tg5kv" [99b244b0-02bb-4d7b-8b98-f38c99f1949e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0114 10:07:35.302492   16385 system_pods.go:89] "registry-kq4cd" [32b72e54-cd00-412d-9956-c5373a71c06c] Pending
	I0114 10:07:35.302498   16385 system_pods.go:89] "registry-proxy-s9fw7" [1ad6757d-2230-4f49-bb63-c55e4bf5d78b] Pending
	I0114 10:07:35.302502   16385 system_pods.go:89] "snapshot-controller-67c8f9659-hb5bx" [6d8b5c82-bb84-4599-9b84-b8dc330fdb73] Pending
	I0114 10:07:35.302508   16385 system_pods.go:89] "snapshot-controller-67c8f9659-lcxlj" [44d0f1e8-c929-4513-9907-e019af13d5bd] Pending
	I0114 10:07:35.302518   16385 system_pods.go:89] "storage-provisioner" [f313a32c-e6d4-45f9-a444-4fc747ab9a81] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0114 10:07:35.302532   16385 system_pods.go:89] "tiller-deploy-696b5bfbb7-pg8sd" [930aa4f6-25af-4b84-9939-c484716e2fdf] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0114 10:07:35.302546   16385 system_pods.go:126] duration metric: took 205.857605ms to wait for k8s-apps to be running ...
	I0114 10:07:35.302559   16385 system_svc.go:44] waiting for kubelet service to be running ....
	I0114 10:07:35.302605   16385 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:07:35.316994   16385 system_svc.go:56] duration metric: took 14.427221ms WaitForService to wait for kubelet.
	I0114 10:07:35.317025   16385 kubeadm.go:573] duration metric: took 7.684025055s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0114 10:07:35.317046   16385 node_conditions.go:102] verifying NodePressure condition ...
	I0114 10:07:35.429078   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:35.496431   16385 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0114 10:07:35.496456   16385 node_conditions.go:123] node cpu capacity is 8
	I0114 10:07:35.496467   16385 node_conditions.go:105] duration metric: took 179.416679ms to run NodePressure ...
	I0114 10:07:35.496477   16385 start.go:217] waiting for startup goroutines ...
	I0114 10:07:35.724768   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:35.756726   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:35.929596   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:36.224285   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:36.256459   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:36.430727   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:36.724426   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:36.756363   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:36.928511   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:37.224237   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:37.256772   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:37.429358   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:37.724106   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:37.756451   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:37.929109   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:38.224112   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:38.256477   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:38.429415   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:38.724147   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:38.756568   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:38.930164   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:39.224480   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:39.256145   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:39.429875   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:39.725045   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:39.755866   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:39.929029   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:40.225162   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:40.256740   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:40.429800   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:40.724644   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:40.757804   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:40.929608   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:41.224266   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:41.257073   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:41.429204   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:41.724805   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:41.756623   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:41.929379   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:42.225520   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:42.257213   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:42.428752   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:42.724549   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:42.756587   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:42.929056   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:43.225074   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:43.255555   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:43.429191   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:43.725582   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:43.756443   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:43.929704   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:44.224148   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:44.255473   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:44.428426   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:44.724342   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:44.757147   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:44.929166   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0114 10:07:45.243826   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:45.256904   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:45.428901   16385 kapi.go:108] duration metric: took 16.508901147s to wait for kubernetes.io/minikube-addons=registry ...
	I0114 10:07:45.724780   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:45.756106   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:46.224657   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:46.256949   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:46.724477   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:46.756412   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:47.224364   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:47.256353   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:47.724307   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:47.756867   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:48.224715   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:48.256985   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:48.723804   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:48.755449   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:49.225145   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:49.256290   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:49.725734   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:49.756537   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:50.224800   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:50.256423   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:50.724781   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:50.758484   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:51.224670   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:51.256456   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:51.724731   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:51.780745   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:52.224532   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:52.257468   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:52.724160   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:52.756568   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:53.224226   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:53.255999   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:53.724090   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:53.755531   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:54.224741   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:54.256401   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:54.724967   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:54.756876   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:55.224543   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:55.256042   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:55.724413   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0114 10:07:55.755981   16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0114 10:07:56.224062   16385 kapi.go:108] duration metric: took 21.005361557s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0114 10:07:56.226144   16385 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0114 10:07:56.227762   16385 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0114 10:07:56.229138   16385 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0114 10:07:56.255864   16385 kapi.go:108] duration metric: took 27.009137513s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0114 10:07:56.258198   16385 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, default-storageclass, metrics-server, helm-tiller, volumesnapshots, registry, gcp-auth, csi-hostpath-driver
	I0114 10:07:56.259730   16385 addons.go:488] enableAddons completed in 28.631581802s
	I0114 10:07:56.260052   16385 exec_runner.go:51] Run: rm -f paused
	I0114 10:07:56.306675   16385 start.go:536] kubectl: 1.26.0, cluster: 1.25.3 (minor skew: 1)
	I0114 10:07:56.309012   16385 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-12-12 17:50:41 UTC, end at Sat 2023-01-14 10:15:18 UTC. --
	Jan 14 10:07:36 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:36.256064573Z" level=warning msg="reference for unknown type: " digest="sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f" remote="ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f"
	Jan 14 10:07:37 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:37.907191908Z" level=warning msg="reference for unknown type: " digest="sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4" remote="k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4"
	Jan 14 10:07:38 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:38.985604668Z" level=warning msg="reference for unknown type: " digest="sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da" remote="gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da"
	Jan 14 10:07:42 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:42.810434841Z" level=warning msg="reference for unknown type: " digest="sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2" remote="k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2"
	Jan 14 10:07:44 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:44.100372898Z" level=warning msg="reference for unknown type: " digest="sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a" remote="k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a"
	Jan 14 10:07:45 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:45.178009545Z" level=warning msg="reference for unknown type: " digest="sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02" remote="k8s.gcr.io/sig-storage/csi-external-health-monitor-agent@sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02"
	Jan 14 10:07:46 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:46.274383462Z" level=warning msg="reference for unknown type: " digest="sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782" remote="k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782"
	Jan 14 10:07:47 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:47.314940991Z" level=warning msg="reference for unknown type: " digest="sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09" remote="k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09"
	Jan 14 10:07:48 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:48.405607389Z" level=warning msg="reference for unknown type: " digest="sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068" remote="k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068"
	Jan 14 10:07:49 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:49.578029838Z" level=warning msg="reference for unknown type: " digest="sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16" remote="k8s.gcr.io/sig-storage/csi-external-health-monitor-controller@sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16"
	Jan 14 10:07:49 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:49.712333969Z" level=info msg="ignoring event" container=c15ad6fc83da60d7d36b5955dd91389b972444f8eae14dc902b5c5ae44529eca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:07:49 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:49.731251418Z" level=info msg="ignoring event" container=ec724690f5f37870fc8571365a6c9a3c73b06368d273c24009905760d2b6f68b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:07:50 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:50.703200543Z" level=warning msg="reference for unknown type: " digest="sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108" remote="k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108"
	Jan 14 10:07:50 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:50.854919320Z" level=info msg="ignoring event" container=60e7e632ec335e4dbcd63c5ba412e34e4564a68df9a1583180cbd638ab0704f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:07:51 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:51.670272266Z" level=warning msg="reference for unknown type: " digest="sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659" remote="k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659"
	Jan 14 10:07:51 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:51.819871487Z" level=info msg="ignoring event" container=9404ef703f1dcf0fcf5bd0e16eb444d48c2350b6889a3b2fcab22362fc4aa399 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:07:52 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:52.623900374Z" level=warning msg="reference for unknown type: " digest="sha256:08a49cb7a588d81723b7e02c16082c75418b6e0a54cf2e44668bd77f79a41a40" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:08a49cb7a588d81723b7e02c16082c75418b6e0a54cf2e44668bd77f79a41a40"
	Jan 14 10:07:52 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:52.847652357Z" level=info msg="ignoring event" container=f04866c0dcb3f1c7f8d5cdc9744a46e968c9c1b029a679ebcb90f76e1643abb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:07:54 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:54.918937924Z" level=warning msg="reference for unknown type: " digest="sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994" remote="k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994"
	Jan 14 10:08:08 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:08:08.518949215Z" level=info msg="ignoring event" container=05a012ae497d60bf6678ca4b3ce8d19bf52d5a1369b6059c45d5036b8b15fc9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:08:10 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:08:10.210852515Z" level=info msg="ignoring event" container=aa2a19fc7fee5562e11db743ae10464bc9bfb3524cd29be285ae35d79d9fd61a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:09:56 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:09:56.236167749Z" level=info msg="ignoring event" container=7e5fd0e68c151c52ee70cce243d5fdfb2b9d8e8ed19a413b814615d1da93e0a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:09:56 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:09:56.294169536Z" level=info msg="ignoring event" container=9d9123abb65f144d7aec82a411e226f0004a5f07dc4ae821d10cb059d1f6c64c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:09:56 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:09:56.294212513Z" level=info msg="ignoring event" container=3ca1b6a5c960498f4a5451b809daed5f7178d1f94002230b2f37f02c63271f6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:09:56 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:09:56.356260464Z" level=info msg="ignoring event" container=10cdee3dbdce5a4c43d46ba103e0a1e70e3b65cfd77d0d1710513085970ec2d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                   CREATED             STATE               NAME                                     ATTEMPT             POD ID
	e0c7cbfe335c2       k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994                            7 minutes ago       Running             liveness-probe                           0                   54faa7460c909
	01800d02aa2ea       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:08a49cb7a588d81723b7e02c16082c75418b6e0a54cf2e44668bd77f79a41a40                            7 minutes ago       Running             gcp-auth                                 0                   a36a66dac7f44
	6e39e5cd9d78d       k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659                           7 minutes ago       Running             hostpath                                 0                   54faa7460c909
	3bdbccbf8a29e       k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108                7 minutes ago       Running             node-driver-registrar                    0                   54faa7460c909
	39188bd328071       k8s.gcr.io/sig-storage/csi-external-health-monitor-controller@sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16   7 minutes ago       Running             csi-external-health-monitor-controller   0                   54faa7460c909
	458bdb236a928       k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09                             7 minutes ago       Running             csi-attacher                             0                   2dd0b15cfe812
	88caf00d90a3d       k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782                          7 minutes ago       Running             csi-snapshotter                          0                   64fff840b7280
	e998541c1561f       k8s.gcr.io/sig-storage/csi-external-health-monitor-agent@sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02        7 minutes ago       Running             csi-external-health-monitor-agent        0                   54faa7460c909
	a2763077d5802       k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a                              7 minutes ago       Running             csi-resizer                              0                   cdcce0e103ed2
	ccefe34ac25b0       k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4                      7 minutes ago       Running             volume-snapshot-controller               0                   e4990db1461fa
	f2482d6749a30       k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2                          7 minutes ago       Running             csi-provisioner                          0                   927beeae93c54
	11fa881780fb0       k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4                      7 minutes ago       Running             volume-snapshot-controller               0                   a2c2d6bba2256
	ef11c381189ff       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                             7 minutes ago       Running             tiller                                   0                   93efd01a37d26
	b508bdd8c8273       gcr.io/cloud-spanner-emulator/emulator@sha256:5469945589399bd79ead8bed929f5eb4d1c5ee98d095df5b0ebe35f0b7160a84                          7 minutes ago       Running             cloud-spanner-emulator                   0                   1928dbcf7a541
	9be7dfbede6e8       registry.k8s.io/metrics-server/metrics-server@sha256:f977ad859fb500c1302d9c3428c6271db031bb7431e7076213b676b345a88dc2                   7 minutes ago       Exited              metrics-server                           0                   d506ee694ccd4
	7c5cbae47eb40       6e38f40d628db                                                                                                                           7 minutes ago       Running             storage-provisioner                      0                   7b54ada7bbbfd
	cc9f535a05271       5185b96f0becf                                                                                                                           7 minutes ago       Running             coredns                                  0                   01847f791815e
	4f53bf8a83055       beaaf00edd38a                                                                                                                           7 minutes ago       Running             kube-proxy                               0                   73299f5498088
	ab6d08dc2097b       a8a176a5d5d69                                                                                                                           8 minutes ago       Running             etcd                                     30                  99cf335a33bbb
	2ff1699b48783       0346dbd74bcb9                                                                                                                           8 minutes ago       Running             kube-apiserver                           0                   06e63285ce53c
	604a7cca50ac7       6039992312758                                                                                                                           8 minutes ago       Running             kube-controller-manager                  35                  fcfe8b131688f
	9fb1ebe94b48a       6d23ec0e8b87e                                                                                                                           8 minutes ago       Running             kube-scheduler                           31                  28a5d33e0cb63
	
	* 
	* ==> coredns [cc9f535a0527] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration SHA512 = 7839f4272055c68eb3195e01fd465aa8d3e1d0906dde9d63a3a809e61980a8e84b23c29639a35e572df16c7c3dba67ccc987b8535eb396aa10f0126ebf95ca4d
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               ubuntu-20-agent
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_14T10_07_14_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:07:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 10:15:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 10:13:21 +0000   Sat, 14 Jan 2023 10:07:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 10:13:21 +0000   Sat, 14 Jan 2023 10:07:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 10:13:21 +0000   Sat, 14 Jan 2023 10:07:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 10:13:21 +0000   Sat, 14 Jan 2023 10:07:24 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
	Addresses:
	  InternalIP:  10.132.0.4
	  Hostname:    ubuntu-20-agent
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                591c9f12-2938-3743-e2bf-c56a050d43d1
	  Boot ID:                    d08c1bf3-58d2-42f4-a94f-b5b5e908f83a
	  Kernel Version:             5.15.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.22
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-7d7766f55c-ng2xw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  gcp-auth                    gcp-auth-6f5c66bfb9-pjmhb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 coredns-565d847f94-hzdcg                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m51s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 csi-hostpath-provisioner-0                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 csi-hostpath-snapshotter-0                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 csi-hostpathplugin-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 etcd-ubuntu-20-agent                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m4s
	  kube-system                 kube-apiserver-ubuntu-20-agent             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 kube-controller-manager-ubuntu-20-agent    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 kube-proxy-kg2xf                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 kube-scheduler-ubuntu-20-agent             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 metrics-server-56c6cfbdd9-tg5kv            100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m50s
	  kube-system                 snapshot-controller-67c8f9659-hb5bx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 snapshot-controller-67c8f9659-lcxlj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  kube-system                 tiller-deploy-696b5bfbb7-pg8sd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m51s  kube-proxy       
	  Normal  Starting                 8m4s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m4s   kubelet          Node ubuntu-20-agent status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m4s   kubelet          Node ubuntu-20-agent status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m4s   kubelet          Node ubuntu-20-agent status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m4s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m54s  kubelet          Node ubuntu-20-agent status is now: NodeReady
	  Normal  RegisteredNode           7m53s  node-controller  Node ubuntu-20-agent event: Registered Node ubuntu-20-agent in Controller
	
	* 
	* ==> dmesg <==
	* [Jan14 09:17]  #2
	[  +0.001147]  #3
	[  +0.000951]  #4
	[  +0.003160] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001758] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001399] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.004161]  #5
	[  +0.000803]  #6
	[  +0.000759]  #7
	[  +0.058287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.436818] i8042: Warning: Keylock active
	[  +0.007559] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003340] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000697] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000662] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000730] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000684] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000723] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000673] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000645] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000627] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.125102] kauditd_printk_skb: 34 callbacks suppressed
	
	* 
	* ==> etcd [ab6d08dc2097] <==
	* {"level":"info","ts":"2023-01-14T10:07:08.485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 switched to configuration voters=(15265396265148522630)"}
	{"level":"info","ts":"2023-01-14T10:07:08.486Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"36fd114adae62b7a","local-member-id":"d3d995060bc0a086","added-peer-id":"d3d995060bc0a086","added-peer-peer-urls":["https://10.132.0.4:2380"]}
	{"level":"info","ts":"2023-01-14T10:07:08.488Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-14T10:07:08.488Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.132.0.4:2380"}
	{"level":"info","ts":"2023-01-14T10:07:08.488Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.132.0.4:2380"}
	{"level":"info","ts":"2023-01-14T10:07:08.488Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d3d995060bc0a086","initial-advertise-peer-urls":["https://10.132.0.4:2380"],"listen-peer-urls":["https://10.132.0.4:2380"],"advertise-client-urls":["https://10.132.0.4:2379"],"listen-client-urls":["https://10.132.0.4:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T10:07:08.488Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 received MsgPreVoteResp from d3d995060bc0a086 at term 1"}
	{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 became candidate at term 2"}
	{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 received MsgVoteResp from d3d995060bc0a086 at term 2"}
	{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 became leader at term 2"}
	{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3d995060bc0a086 elected leader d3d995060bc0a086 at term 2"}
	{"level":"info","ts":"2023-01-14T10:07:09.378Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:07:09.379Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"36fd114adae62b7a","local-member-id":"d3d995060bc0a086","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:07:09.379Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:07:09.379Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:07:09.379Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:07:09.379Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d3d995060bc0a086","local-member-attributes":"{Name:ubuntu-20-agent ClientURLs:[https://10.132.0.4:2379]}","request-path":"/0/members/d3d995060bc0a086/attributes","cluster-id":"36fd114adae62b7a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:07:09.379Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:07:09.380Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:07:09.380Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:07:09.381Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.132.0.4:2379"}
	{"level":"info","ts":"2023-01-14T10:07:09.381Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  10:15:18 up 57 min,  0 users,  load average: 0.18, 0.57, 0.43
	Linux ubuntu-20-agent 5.15.0-1027-gcp #34~20.04.1-Ubuntu SMP Mon Jan 9 18:40:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [2ff1699b4878] <==
	* W0114 10:08:29.735793       1 handler_proxy.go:105] no RequestInfo found in the context
	E0114 10:08:29.735839       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0114 10:08:29.735846       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0114 10:08:29.736966       1 handler_proxy.go:105] no RequestInfo found in the context
	E0114 10:08:29.737043       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0114 10:08:29.737055       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0114 10:10:29.736317       1 handler_proxy.go:105] no RequestInfo found in the context
	E0114 10:10:29.736354       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0114 10:10:29.736360       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0114 10:10:29.737430       1 handler_proxy.go:105] no RequestInfo found in the context
	E0114 10:10:29.737513       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0114 10:10:29.737535       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0114 10:12:12.170806       1 handler_proxy.go:105] no RequestInfo found in the context
	W0114 10:12:12.170806       1 handler_proxy.go:105] no RequestInfo found in the context
	E0114 10:12:12.170895       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0114 10:12:12.170902       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0114 10:12:12.170906       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0114 10:12:12.172033       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0114 10:12:19.702602       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.183.89:443: connect: connection refused
	E0114 10:12:19.702941       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.183.89:443: connect: connection refused
	E0114 10:12:19.708055       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.183.89:443: connect: connection refused
	E0114 10:12:19.728858       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.183.89:443: connect: connection refused
	
	* 
	* ==> kube-controller-manager [604a7cca50ac] <==
	* I0114 10:07:56.669462       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:08:23.011401       1 job_controller.go:510] enqueueing job gcp-auth/gcp-auth-certs-create
	E0114 10:08:23.015508       1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0114 10:08:23.017363       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0114 10:08:23.034137       1 job_controller.go:510] enqueueing job gcp-auth/gcp-auth-certs-create
	I0114 10:08:24.006060       1 job_controller.go:510] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0114 10:08:24.023237       1 job_controller.go:510] enqueueing job gcp-auth/gcp-auth-certs-patch
	E0114 10:08:26.345833       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0114 10:08:26.682841       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0114 10:08:56.352425       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0114 10:08:56.695296       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0114 10:09:26.358600       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0114 10:09:26.705664       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0114 10:09:56.195344       1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0114 10:09:56.197531       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0114 10:09:56.365404       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0114 10:09:56.716232       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0114 10:10:26.371474       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0114 10:10:26.727043       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0114 10:10:56.377359       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0114 10:10:56.737522       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0114 10:11:26.383986       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0114 10:11:26.748518       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0114 10:11:56.390598       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0114 10:11:56.760126       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [4f53bf8a8305] <==
	* I0114 10:07:27.611974       1 node.go:163] Successfully retrieved node IP: 10.132.0.4
	I0114 10:07:27.612044       1 server_others.go:138] "Detected node IP" address="10.132.0.4"
	I0114 10:07:27.612070       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:07:27.631260       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:07:27.631301       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0114 10:07:27.631313       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0114 10:07:27.631329       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0114 10:07:27.631364       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:07:27.631508       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:07:27.631705       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:07:27.631724       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:07:27.632227       1 config.go:317] "Starting service config controller"
	I0114 10:07:27.632242       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:07:27.632259       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:07:27.632261       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:07:27.632366       1 config.go:444] "Starting node config controller"
	I0114 10:07:27.632377       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:07:27.732673       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 10:07:27.732770       1 shared_informer.go:262] Caches are synced for node config
	I0114 10:07:27.732788       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [9fb1ebe94b48] <==
	* E0114 10:07:11.194036       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0114 10:07:11.194048       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0114 10:07:11.193962       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0114 10:07:11.194114       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0114 10:07:11.193929       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0114 10:07:11.194159       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0114 10:07:11.194203       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0114 10:07:11.194226       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0114 10:07:11.194320       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 10:07:11.194360       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0114 10:07:12.003311       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0114 10:07:12.003342       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0114 10:07:12.094781       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0114 10:07:12.094810       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0114 10:07:12.107338       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 10:07:12.107444       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0114 10:07:12.129544       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0114 10:07:12.129574       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0114 10:07:12.197369       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0114 10:07:12.197394       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0114 10:07:12.266708       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0114 10:07:12.266775       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0114 10:07:12.266708       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0114 10:07:12.266805       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0114 10:07:15.191506       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-12-12 17:50:41 UTC, end at Sat 2023-01-14 10:15:18 UTC. --
	Jan 14 10:10:00 ubuntu-20-agent kubelet[17674]: E0114 10:10:00.366805   17674 resource_metrics.go:126] "Error getting summary for resourceMetric prometheus endpoint" err="failed to list pod stats: failed to list all container stats: rpc error: code = Unknown desc = Error response from daemon: No such container: 7e5fd0e68c151c52ee70cce243d5fdfb2b9d8e8ed19a413b814615d1da93e0a2"
	Jan 14 10:10:14 ubuntu-20-agent kubelet[17674]: E0114 10:10:14.417063   17674 remote_runtime.go:1050] "ListContainerStats with filter from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 9d9123abb65f144d7aec82a411e226f0004a5f07dc4ae821d10cb059d1f6c64c" filter="&ContainerStatsFilter{Id:,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jan 14 10:10:14 ubuntu-20-agent kubelet[17674]: E0114 10:10:14.417105   17674 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to list all container stats: rpc error: code = Unknown desc = Error response from daemon: No such container: 9d9123abb65f144d7aec82a411e226f0004a5f07dc4ae821d10cb059d1f6c64c"
	Jan 14 10:11:07 ubuntu-20-agent kubelet[17674]: E0114 10:11:07.699557   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_etcd-ubuntu-20-agent_65715dc9e4cbf94a2dca360adc587df7/etcd/7.log\": no such file or directory" containerName="etcd"
	Jan 14 10:11:07 ubuntu-20-agent kubelet[17674]: E0114 10:11:07.700422   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-controller-manager-ubuntu-20-agent_4a4f6d86d11728f017fab2e2d3b5fef6/kube-controller-manager/7.log\": no such file or directory" containerName="kube-controller-manager"
	Jan 14 10:11:07 ubuntu-20-agent kubelet[17674]: E0114 10:11:07.701124   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-scheduler-ubuntu-20-agent_136b67c9bcacbb6db8fa00666fead41b/kube-scheduler/12.log\": no such file or directory" containerName="kube-scheduler"
	Jan 14 10:12:00 ubuntu-20-agent kubelet[17674]: E0114 10:12:00.970522   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_etcd-ubuntu-20-agent_65715dc9e4cbf94a2dca360adc587df7/etcd/11.log\": no such file or directory" containerName="etcd"
	Jan 14 10:12:00 ubuntu-20-agent kubelet[17674]: E0114 10:12:00.971293   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-controller-manager-ubuntu-20-agent_4a4f6d86d11728f017fab2e2d3b5fef6/kube-controller-manager/15.log\": no such file or directory" containerName="kube-controller-manager"
	Jan 14 10:12:00 ubuntu-20-agent kubelet[17674]: E0114 10:12:00.972045   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-scheduler-ubuntu-20-agent_136b67c9bcacbb6db8fa00666fead41b/kube-scheduler/13.log\": no such file or directory" containerName="kube-scheduler"
	Jan 14 10:12:55 ubuntu-20-agent kubelet[17674]: E0114 10:12:55.227704   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_etcd-ubuntu-20-agent_65715dc9e4cbf94a2dca360adc587df7/etcd/21.log\": no such file or directory" containerName="etcd"
	Jan 14 10:12:55 ubuntu-20-agent kubelet[17674]: E0114 10:12:55.228541   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-controller-manager-ubuntu-20-agent_4a4f6d86d11728f017fab2e2d3b5fef6/kube-controller-manager/18.log\": no such file or directory" containerName="kube-controller-manager"
	Jan 14 10:12:55 ubuntu-20-agent kubelet[17674]: E0114 10:12:55.229180   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-scheduler-ubuntu-20-agent_136b67c9bcacbb6db8fa00666fead41b/kube-scheduler/19.log\": no such file or directory" containerName="kube-scheduler"
	Jan 14 10:13:49 ubuntu-20-agent kubelet[17674]: E0114 10:13:49.485747   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_etcd-ubuntu-20-agent_65715dc9e4cbf94a2dca360adc587df7/etcd/15.log\": no such file or directory" containerName="etcd"
	Jan 14 10:13:49 ubuntu-20-agent kubelet[17674]: E0114 10:13:49.485960   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-controller-manager-ubuntu-20-agent_4a4f6d86d11728f017fab2e2d3b5fef6/kube-controller-manager/18.log\": no such file or directory" containerName="kube-controller-manager"
	Jan 14 10:13:49 ubuntu-20-agent kubelet[17674]: E0114 10:13:49.486082   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-scheduler-ubuntu-20-agent_136b67c9bcacbb6db8fa00666fead41b/kube-scheduler/18.log\": no such file or directory" containerName="kube-scheduler"
	Jan 14 10:14:43 ubuntu-20-agent kubelet[17674]: E0114 10:14:43.741779   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_etcd-ubuntu-20-agent_65715dc9e4cbf94a2dca360adc587df7/etcd/10.log\": no such file or directory" containerName="etcd"
	Jan 14 10:14:43 ubuntu-20-agent kubelet[17674]: E0114 10:14:43.742766   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-controller-manager-ubuntu-20-agent_4a4f6d86d11728f017fab2e2d3b5fef6/kube-controller-manager/11.log\": no such file or directory" containerName="kube-controller-manager"
	Jan 14 10:14:43 ubuntu-20-agent kubelet[17674]: E0114 10:14:43.743678   17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-scheduler-ubuntu-20-agent_136b67c9bcacbb6db8fa00666fead41b/kube-scheduler/20.log\": no such file or directory" containerName="kube-scheduler"
	Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: I0114 10:15:18.726620   17674 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpnm7\" (UniqueName: \"kubernetes.io/projected/99b244b0-02bb-4d7b-8b98-f38c99f1949e-kube-api-access-wpnm7\") pod \"99b244b0-02bb-4d7b-8b98-f38c99f1949e\" (UID: \"99b244b0-02bb-4d7b-8b98-f38c99f1949e\") "
	Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: I0114 10:15:18.726700   17674 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/99b244b0-02bb-4d7b-8b98-f38c99f1949e-tmp-dir\") pod \"99b244b0-02bb-4d7b-8b98-f38c99f1949e\" (UID: \"99b244b0-02bb-4d7b-8b98-f38c99f1949e\") "
	Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: W0114 10:15:18.726982   17674 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/99b244b0-02bb-4d7b-8b98-f38c99f1949e/volumes/kubernetes.io~empty-dir/tmp-dir: clearQuota called, but quotas disabled
	Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: I0114 10:15:18.727117   17674 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99b244b0-02bb-4d7b-8b98-f38c99f1949e-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "99b244b0-02bb-4d7b-8b98-f38c99f1949e" (UID: "99b244b0-02bb-4d7b-8b98-f38c99f1949e"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: I0114 10:15:18.728743   17674 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b244b0-02bb-4d7b-8b98-f38c99f1949e-kube-api-access-wpnm7" (OuterVolumeSpecName: "kube-api-access-wpnm7") pod "99b244b0-02bb-4d7b-8b98-f38c99f1949e" (UID: "99b244b0-02bb-4d7b-8b98-f38c99f1949e"). InnerVolumeSpecName "kube-api-access-wpnm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: I0114 10:15:18.827361   17674 reconciler.go:399] "Volume detached for volume \"kube-api-access-wpnm7\" (UniqueName: \"kubernetes.io/projected/99b244b0-02bb-4d7b-8b98-f38c99f1949e-kube-api-access-wpnm7\") on node \"ubuntu-20-agent\" DevicePath \"\""
	Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: I0114 10:15:18.827403   17674 reconciler.go:399] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/99b244b0-02bb-4d7b-8b98-f38c99f1949e-tmp-dir\") on node \"ubuntu-20-agent\" DevicePath \"\""
	
	* 
	* ==> storage-provisioner [7c5cbae47eb4] <==
	* I0114 10:07:30.298684       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0114 10:07:30.307931       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0114 10:07:30.307973       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0114 10:07:30.314738       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0114 10:07:30.314894       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent_c56e123b-e8b9-491b-96eb-2e83e3e0c4bc!
	I0114 10:07:30.314907       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8935411-b4a4-460f-9f6b-35ddc99495f4", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent_c56e123b-e8b9-491b-96eb-2e83e3e0c4bc became leader
	I0114 10:07:30.415551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent_c56e123b-e8b9-491b-96eb-2e83e3e0c4bc!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context minikube describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context minikube describe pod : exit status 1 (71.555314ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context minikube describe pod : exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (323.20s)


Test pass (87/144)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 4.06
6 TestDownloadOnly/v1.16.0/binaries 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.25.3/json-events 4.05
13 TestDownloadOnly/v1.25.3/binaries 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
19 TestBinaryMirror 0.51
20 TestOffline 53.9
22 TestAddons/Setup 54.82
24 TestAddons/parallel/Registry 119.91
27 TestAddons/parallel/HelmTiller 11.57
29 TestAddons/parallel/CSI 39.24
30 TestAddons/parallel/Headlamp 9.06
31 TestAddons/parallel/CloudSpanner 5.23
34 TestAddons/serial/GCPAuth/Namespaces 0.15
35 TestAddons/StoppedEnableDisable 10.72
37 TestCertExpiration 265.73
47 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/StartWithProxy 30.22
49 TestFunctional/serial/AuditLog 0
50 TestFunctional/serial/SoftStart 34.14
51 TestFunctional/serial/KubeContext 0.05
52 TestFunctional/serial/KubectlGetPods 0.09
54 TestFunctional/serial/MinikubeKubectlCmd 0.13
55 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
56 TestFunctional/serial/ExtraConfig 42.71
57 TestFunctional/serial/ComponentHealth 0.07
58 TestFunctional/serial/LogsCmd 1.01
59 TestFunctional/serial/LogsFileCmd 1.06
61 TestFunctional/parallel/ConfigCmd 0.43
62 TestFunctional/parallel/DashboardCmd 7.92
63 TestFunctional/parallel/DryRun 0.2
64 TestFunctional/parallel/InternationalLanguage 0.1
65 TestFunctional/parallel/StatusCmd 0.56
68 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
69 TestFunctional/parallel/ProfileCmd/profile_list 0.27
70 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
71 TestFunctional/parallel/ServiceCmd 11.15
72 TestFunctional/parallel/ServiceCmdConnect 7.37
73 TestFunctional/parallel/AddonsCmd 0.17
74 TestFunctional/parallel/PersistentVolumeClaim 23.34
86 TestFunctional/parallel/MySQL 21.4
90 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
91 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.45
92 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.45
95 TestFunctional/parallel/NodeLabels 0.06
99 TestFunctional/parallel/Version/short 0.07
100 TestFunctional/parallel/Version/components 0.44
101 TestFunctional/parallel/License 0.23
102 TestFunctional/delete_addon-resizer_images 0.04
103 TestFunctional/delete_my-image_image 0.02
104 TestFunctional/delete_minikube_cached_images 0.02
109 TestJSONOutput/start/Command 30.55
110 TestJSONOutput/start/Audit 0
112 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
113 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
115 TestJSONOutput/pause/Command 0.58
116 TestJSONOutput/pause/Audit 0
118 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
119 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
121 TestJSONOutput/unpause/Command 0.46
122 TestJSONOutput/unpause/Audit 0
124 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
125 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
127 TestJSONOutput/stop/Command 10.5
128 TestJSONOutput/stop/Audit 0
130 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
131 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
132 TestErrorJSONOutput 0.28
137 TestMainNoArgs 0.07
138 TestMinikubeProfile 34.12
143 TestChangeNoneUser 33.04
146 TestPause/serial/Start 28.73
147 TestPause/serial/SecondStartNoReconfiguration 40.57
148 TestPause/serial/Pause 0.55
149 TestPause/serial/VerifyStatus 0.18
150 TestPause/serial/Unpause 0.48
151 TestPause/serial/PauseAgain 0.58
152 TestPause/serial/DeletePaused 3.23
153 TestPause/serial/VerifyDeletedResources 0.1
167 TestRunningBinaryUpgrade 100.22
169 TestStoppedBinaryUpgrade/Setup 1.74
170 TestStoppedBinaryUpgrade/Upgrade 58.55
171 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
172 TestKubernetesUpgrade 341.11
TestDownloadOnly/v1.16.0/json-events (4.06s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (4.061649756s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (4.06s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
--- PASS: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (85.364337ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------|------|---------|---------------------|----------|
	| Command |              Args              | Profile  | User | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | root | v1.28.0 | 14 Jan 23 10:05 UTC |          |
	|         | -p minikube --force            |          |      |         |                     |          |
	|         | --alsologtostderr              |          |      |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |          |      |         |                     |          |
	|         | --container-runtime=docker     |          |      |         |                     |          |
	|         | --driver=none                  |          |      |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |      |         |                     |          |
	|---------|--------------------------------|----------|------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:05:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:05:58.368511   10806 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:05:58.368668   10806 out.go:343] TERM=unknown,COLORTERM=, which probably does not support color
	I0114 10:05:58.368677   10806 out.go:309] Setting ErrFile to fd 2...
	I0114 10:05:58.368685   10806 out.go:343] TERM=unknown,COLORTERM=, which probably does not support color
	I0114 10:05:58.368792   10806 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3824/.minikube/bin
	W0114 10:05:58.368935   10806 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15642-3824/.minikube/config/config.json: open /home/jenkins/minikube-integration/15642-3824/.minikube/config/config.json: no such file or directory
	I0114 10:05:58.369277   10806 out.go:303] Setting JSON to true
	I0114 10:05:58.370128   10806 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2905,"bootTime":1673687853,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:05:58.370189   10806 start.go:135] virtualization: kvm guest
	I0114 10:05:58.372773   10806 out.go:97] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:05:58.372897   10806 notify.go:220] Checking for updates...
	I0114 10:05:58.374858   10806 out.go:169] MINIKUBE_LOCATION=15642
	W0114 10:05:58.372900   10806 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15642-3824/.minikube/cache/preloaded-tarball: no such file or directory
	I0114 10:05:58.377749   10806 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:05:58.379310   10806 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15642-3824/kubeconfig
	I0114 10:05:58.380717   10806 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3824/.minikube
	I0114 10:05:58.382210   10806 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "m01" does not exist.
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.25.3/json-events (4.05s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (4.05184113s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (4.05s)

TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
--- PASS: TestDownloadOnly/v1.25.3/binaries (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (84.120436ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------|------|---------|---------------------|----------|
	| Command |              Args              | Profile  | User | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | root | v1.28.0 | 14 Jan 23 10:05 UTC |          |
	|         | -p minikube --force            |          |      |         |                     |          |
	|         | --alsologtostderr              |          |      |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |          |      |         |                     |          |
	|         | --container-runtime=docker     |          |      |         |                     |          |
	|         | --driver=none                  |          |      |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |      |         |                     |          |
	| start   | -o=json --download-only        | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC |          |
	|         | -p minikube --force            |          |      |         |                     |          |
	|         | --alsologtostderr              |          |      |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |          |      |         |                     |          |
	|         | --container-runtime=docker     |          |      |         |                     |          |
	|         | --driver=none                  |          |      |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |      |         |                     |          |
	|---------|--------------------------------|----------|------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:06:02
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:06:02.517384   10830 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:06:02.517602   10830 out.go:343] TERM=unknown,COLORTERM=, which probably does not support color
	I0114 10:06:02.517612   10830 out.go:309] Setting ErrFile to fd 2...
	I0114 10:06:02.517616   10830 out.go:343] TERM=unknown,COLORTERM=, which probably does not support color
	I0114 10:06:02.517747   10830 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3824/.minikube/bin
	W0114 10:06:02.517880   10830 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15642-3824/.minikube/config/config.json: open /home/jenkins/minikube-integration/15642-3824/.minikube/config/config.json: no such file or directory
	I0114 10:06:02.518048   10830 out.go:303] Setting JSON to true
	I0114 10:06:02.518807   10830 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2910,"bootTime":1673687853,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:06:02.518879   10830 start.go:135] virtualization: kvm guest
	I0114 10:06:02.521122   10830 out.go:97] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:06:02.521235   10830 notify.go:220] Checking for updates...
	I0114 10:06:02.522920   10830 out.go:169] MINIKUBE_LOCATION=15642
	W0114 10:06:02.521231   10830 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15642-3824/.minikube/cache/preloaded-tarball: no such file or directory
	I0114 10:06:02.526049   10830 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:06:02.527600   10830 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15642-3824/kubeconfig
	I0114 10:06:02.529413   10830 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3824/.minikube
	I0114 10:06:02.531080   10830 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "m01" does not exist.
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

TestBinaryMirror (0.51s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:43039 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.51s)

TestOffline (53.9s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (50.960306881s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (2.936752153s)
--- PASS: TestOffline (53.90s)

TestAddons/Setup (54.82s)

=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (54.822207488s)
--- PASS: TestAddons/Setup (54.82s)

TestAddons/parallel/Registry (119.91s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:287: registry stabilized in 9.501479ms
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-kq4cd" [32b72e54-cd00-412d-9956-c5373a71c06c] Running
addons_test.go:289: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00958849s
addons_test.go:292: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-s9fw7" [1ad6757d-2230-4f49-bb63-c55e4bf5d78b] Running
addons_test.go:292: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007119633s
addons_test.go:297: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:302: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:302: (dbg) Done: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.799525405s)
addons_test.go:316: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2023/01/14 10:08:11 [DEBUG] GET http://10.132.0.4:5000
2023/01/14 10:08:11 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:11 [DEBUG] GET http://10.132.0.4:5000: retrying in 1s (4 left)
2023/01/14 10:08:12 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:12 [DEBUG] GET http://10.132.0.4:5000: retrying in 2s (3 left)
2023/01/14 10:08:14 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:14 [DEBUG] GET http://10.132.0.4:5000: retrying in 4s (2 left)
2023/01/14 10:08:18 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:18 [DEBUG] GET http://10.132.0.4:5000: retrying in 8s (1 left)
2023/01/14 10:08:26 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:27 [DEBUG] GET http://10.132.0.4:5000
2023/01/14 10:08:27 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:27 [DEBUG] GET http://10.132.0.4:5000: retrying in 1s (4 left)
2023/01/14 10:08:28 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:28 [DEBUG] GET http://10.132.0.4:5000: retrying in 2s (3 left)
2023/01/14 10:08:30 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:30 [DEBUG] GET http://10.132.0.4:5000: retrying in 4s (2 left)
2023/01/14 10:08:34 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:34 [DEBUG] GET http://10.132.0.4:5000: retrying in 8s (1 left)
2023/01/14 10:08:42 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:43 [DEBUG] GET http://10.132.0.4:5000
2023/01/14 10:08:43 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:43 [DEBUG] GET http://10.132.0.4:5000: retrying in 1s (4 left)
2023/01/14 10:08:44 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:44 [DEBUG] GET http://10.132.0.4:5000: retrying in 2s (3 left)
2023/01/14 10:08:46 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:46 [DEBUG] GET http://10.132.0.4:5000: retrying in 4s (2 left)
2023/01/14 10:08:50 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:50 [DEBUG] GET http://10.132.0.4:5000: retrying in 8s (1 left)
2023/01/14 10:08:58 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:58 [DEBUG] GET http://10.132.0.4:5000
2023/01/14 10:08:58 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:58 [DEBUG] GET http://10.132.0.4:5000: retrying in 1s (4 left)
2023/01/14 10:08:59 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:08:59 [DEBUG] GET http://10.132.0.4:5000: retrying in 2s (3 left)
2023/01/14 10:09:01 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:01 [DEBUG] GET http://10.132.0.4:5000: retrying in 4s (2 left)
2023/01/14 10:09:05 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:05 [DEBUG] GET http://10.132.0.4:5000: retrying in 8s (1 left)
2023/01/14 10:09:13 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:16 [DEBUG] GET http://10.132.0.4:5000
2023/01/14 10:09:16 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:16 [DEBUG] GET http://10.132.0.4:5000: retrying in 1s (4 left)
2023/01/14 10:09:17 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:17 [DEBUG] GET http://10.132.0.4:5000: retrying in 2s (3 left)
2023/01/14 10:09:19 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:19 [DEBUG] GET http://10.132.0.4:5000: retrying in 4s (2 left)
2023/01/14 10:09:23 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:23 [DEBUG] GET http://10.132.0.4:5000: retrying in 8s (1 left)
2023/01/14 10:09:31 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:33 [DEBUG] GET http://10.132.0.4:5000
2023/01/14 10:09:33 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:33 [DEBUG] GET http://10.132.0.4:5000: retrying in 1s (4 left)
2023/01/14 10:09:34 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:34 [DEBUG] GET http://10.132.0.4:5000: retrying in 2s (3 left)
2023/01/14 10:09:36 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:36 [DEBUG] GET http://10.132.0.4:5000: retrying in 4s (2 left)
2023/01/14 10:09:40 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:40 [DEBUG] GET http://10.132.0.4:5000: retrying in 8s (1 left)
2023/01/14 10:09:48 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:53 [DEBUG] GET http://10.132.0.4:5000
2023/01/14 10:09:53 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:53 [DEBUG] GET http://10.132.0.4:5000: retrying in 1s (4 left)
2023/01/14 10:09:54 [ERR] GET http://10.132.0.4:5000 request failed: Get "http://10.132.0.4:5000": dial tcp 10.132.0.4:5000: connect: connection refused
2023/01/14 10:09:54 [DEBUG] GET http://10.132.0.4:5000: retrying in 2s (3 left)
addons_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (119.91s)

TestAddons/parallel/HelmTiller (11.57s)
=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:413: tiller-deploy stabilized in 7.132443ms
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-pg8sd" [930aa4f6-25af-4b84-9939-c484716e2fdf] Running
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008337196s
addons_test.go:430: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:430: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.334910006s)
addons_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.57s)

TestAddons/parallel/CSI (39.24s)
=== RUN   TestAddons/parallel/CSI
addons_test.go:518: csi-hostpath-driver pods stabilized in 5.931755ms
addons_test.go:521: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:526: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:531: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:536: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [e02723c8-afd8-441f-8901-8daa522f78f2] Pending
helpers_test.go:342: "task-pv-pod" [e02723c8-afd8-441f-8901-8daa522f78f2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [e02723c8-afd8-441f-8901-8daa522f78f2] Running
addons_test.go:536: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.00642136s
addons_test.go:541: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:546: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:551: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:557: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:563: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [53f15f4e-b552-4246-9454-c0e277aa8a16] Pending
helpers_test.go:342: "task-pv-pod-restore" [53f15f4e-b552-4246-9454-c0e277aa8a16] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [53f15f4e-b552-4246-9454-c0e277aa8a16] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 15.005988171s
addons_test.go:583: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:587: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:591: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:595: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:595: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.411213984s)
addons_test.go:599: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.24s)

TestAddons/parallel/Headlamp (9.06s)
=== RUN   TestAddons/parallel/Headlamp
addons_test.go:774: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:774: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1: (1.053774557s)
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-764769c887-nfktj" [3a262dea-c79a-4311-b1bd-2eeeba313044] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-764769c887-nfktj" [3a262dea-c79a-4311-b1bd-2eeeba313044] Running
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 8.006582672s
--- PASS: TestAddons/parallel/Headlamp (9.06s)

TestAddons/parallel/CloudSpanner (5.23s)
=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:342: "cloud-spanner-emulator-7d7766f55c-ng2xw" [97f00a06-f082-4cce-9190-23ad4382ceb5] Running
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006190275s
addons_test.go:798: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.23s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:607: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:621: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/StoppedEnableDisable (10.72s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:139: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:139: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.498580567s)
addons_test.go:143: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.72s)

TestCertExpiration (265.73s)
=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.781046837s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (1m7.792828032s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.158989171s)
--- PASS: TestCertExpiration (265.73s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15642-3824/.minikube/files/etc/test/nested/copy/10794/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (30.22s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (30.219599162s)
--- PASS: TestFunctional/serial/StartWithProxy (30.22s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.14s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (34.136630884s)
functional_test.go:656: soft start took 34.137190778s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.14s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (42.71s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.713327434s)
functional_test.go:754: restart took 42.713437163s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.71s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.01s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p minikube logs: (1.009685139s)
--- PASS: TestFunctional/serial/LogsCmd (1.01s)

TestFunctional/serial/LogsFileCmd (1.06s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd2379222705/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd2379222705/001/logs.txt: (1.056658046s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.06s)

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (68.418562ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (68.784859ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (7.92s)
=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2023/01/14 10:23:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 51737: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.92s)

TestFunctional/parallel/DryRun (0.2s)
=== RUN   TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (105.421294ms)

-- stdout --
	* minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-3824/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3824/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the none driver based on existing profile

-- /stdout --
** stderr ** 
	I0114 10:23:42.881104   52182 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:23:42.881246   52182 out.go:343] TERM=unknown,COLORTERM=, which probably does not support color
	I0114 10:23:42.881258   52182 out.go:309] Setting ErrFile to fd 2...
	I0114 10:23:42.881266   52182 out.go:343] TERM=unknown,COLORTERM=, which probably does not support color
	I0114 10:23:42.881385   52182 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3824/.minikube/bin
	I0114 10:23:42.881716   52182 out.go:303] Setting JSON to false
	I0114 10:23:42.882784   52182 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3970,"bootTime":1673687853,"procs":383,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:23:42.882857   52182 start.go:135] virtualization: kvm guest
	I0114 10:23:42.885793   52182 out.go:177] * minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	W0114 10:23:42.887294   52182 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15642-3824/.minikube/cache/preloaded-tarball: no such file or directory
	I0114 10:23:42.888877   52182 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:23:42.887318   52182 notify.go:220] Checking for updates...
	I0114 10:23:42.892290   52182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:23:42.894212   52182 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3824/kubeconfig
	I0114 10:23:42.895959   52182 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3824/.minikube
	I0114 10:23:42.897869   52182 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:23:42.899993   52182 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 10:23:42.900258   52182 exec_runner.go:51] Run: systemctl --version
	I0114 10:23:42.903004   52182 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:23:42.905277   52182 out.go:177] * Using the none driver based on existing profile
	I0114 10:23:42.907602   52182 start.go:294] selected driver: none
	I0114 10:23:42.907622   52182 start.go:838] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name:m01 IP:10.132.0.4 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-
device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:23:42.907749   52182 start.go:849] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:23:42.907769   52182 start.go:1598] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0114 10:23:42.908092   52182 out.go:239] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0114 10:23:42.910917   52182 out.go:177] 
	W0114 10:23:42.912619   52182 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0114 10:23:42.914229   52182 out.go:177] 

** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.20s)

TestFunctional/parallel/InternationalLanguage (0.1s)
=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (97.48874ms)

-- stdout --
	* minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-3824/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3824/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0114 10:23:43.082659   52208 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:23:43.082778   52208 out.go:343] TERM=unknown,COLORTERM=, which probably does not support color
	I0114 10:23:43.082785   52208 out.go:309] Setting ErrFile to fd 2...
	I0114 10:23:43.082790   52208 out.go:343] TERM=unknown,COLORTERM=, which probably does not support color
	I0114 10:23:43.082948   52208 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3824/.minikube/bin
	I0114 10:23:43.083238   52208 out.go:303] Setting JSON to false
	I0114 10:23:43.084268   52208 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3970,"bootTime":1673687853,"procs":383,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:23:43.084344   52208 start.go:135] virtualization: kvm guest
	I0114 10:23:43.087101   52208 out.go:177] * minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	I0114 10:23:43.088793   52208 out.go:177]   - MINIKUBE_LOCATION=15642
	W0114 10:23:43.088706   52208 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15642-3824/.minikube/cache/preloaded-tarball: no such file or directory
	I0114 10:23:43.088712   52208 notify.go:220] Checking for updates...
	I0114 10:23:43.090530   52208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:23:43.092373   52208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-3824/kubeconfig
	I0114 10:23:43.094017   52208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3824/.minikube
	I0114 10:23:43.095630   52208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:23:43.097465   52208 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 10:23:43.097775   52208 exec_runner.go:51] Run: systemctl --version
	I0114 10:23:43.099991   52208 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:23:43.102254   52208 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0114 10:23:43.104020   52208 start.go:294] selected driver: none
	I0114 10:23:43.104047   52208 start.go:838] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name:m01 IP:10.132.0.4 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-
device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:23:43.104196   52208 start.go:849] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:23:43.104215   52208 start.go:1598] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0114 10:23:43.104528   52208 out.go:239] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0114 10:23:43.107387   52208 out.go:177] 
	W0114 10:23:43.109100   52208 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0114 10:23:43.110871   52208 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.10s)

TestFunctional/parallel/StatusCmd (0.56s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.56s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "194.955233ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "74.606786ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "192.508471ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "73.222429ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

TestFunctional/parallel/ServiceCmd (11.15s)

=== RUN   TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-zdvwm" [2d9c9b45-a409-4b51-a3ca-59c5d880f90c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-5fcdfb5cc4-zdvwm" [2d9c9b45-a409-4b51-a3ca-59c5d880f90c] Running
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 10.012040719s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1476: found endpoint: https://10.132.0.4:31538
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1511: found endpoint for hello-node: http://10.132.0.4:31538
--- PASS: TestFunctional/parallel/ServiceCmd (11.15s)

TestFunctional/parallel/ServiceCmdConnect (7.37s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-5hrb8" [1b2930a2-669c-4179-8ce2-a6f3c25da359] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-connect-6458c8fb6f-5hrb8" [1b2930a2-669c-4179-8ce2-a6f3c25da359] Running
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.008948984s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1585: found endpoint for hello-node-connect: http://10.132.0.4:32241
functional_test.go:1605: http://10.132.0.4:32241: success! body:

Hostname: hello-node-connect-6458c8fb6f-5hrb8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.132.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=10.132.0.4:32241
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.37s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (23.34s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [5a17efac-187e-4dff-8a01-ae53c9a145f3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006685334s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [87f4807b-5068-4b18-ab66-738ce42c9929] Pending
helpers_test.go:342: "sp-pod" [87f4807b-5068-4b18-ab66-738ce42c9929] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [87f4807b-5068-4b18-ab66-738ce42c9929] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.006521625s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.411089506s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [adcbe9e8-da2f-4181-9385-c7a3f265eeac] Pending
helpers_test.go:342: "sp-pod" [adcbe9e8-da2f-4181-9385-c7a3f265eeac] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [adcbe9e8-da2f-4181-9385-c7a3f265eeac] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007070318s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.34s)

TestFunctional/parallel/MySQL (21.4s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-596b7fcdbf-54fjp" [fc3d6067-f601-4aa5-83ed-fe09994679e6] Pending
helpers_test.go:342: "mysql-596b7fcdbf-54fjp" [fc3d6067-f601-4aa5-83ed-fe09994679e6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:342: "mysql-596b7fcdbf-54fjp" [fc3d6067-f601-4aa5-83ed-fe09994679e6] Running
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.007771796s
functional_test.go:1734: (dbg) Run:  kubectl --context minikube exec mysql-596b7fcdbf-54fjp -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context minikube exec mysql-596b7fcdbf-54fjp -- mysql -ppassword -e "show databases;": exit status 1 (159.421586ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context minikube exec mysql-596b7fcdbf-54fjp -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context minikube exec mysql-596b7fcdbf-54fjp -- mysql -ppassword -e "show databases;": exit status 1 (133.876871ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context minikube exec mysql-596b7fcdbf-54fjp -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context minikube exec mysql-596b7fcdbf-54fjp -- mysql -ppassword -e "show databases;": exit status 1 (134.562832ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context minikube exec mysql-596b7fcdbf-54fjp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.40s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.45s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.446280448s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.45s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (13.45s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.445026839s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.45s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.44s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/delete_addon-resizer_images (0.04s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:minikube
--- PASS: TestFunctional/delete_addon-resizer_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Command (30.55s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (30.545152532s)
--- PASS: TestJSONOutput/start/Command (30.55s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.58s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.5s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.49861568s)
--- PASS: TestJSONOutput/stop/Command (10.50s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.830993ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"af1fe26f-63f5-4ff5-aafd-20b4b63ed761","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9b10ecee-7403-4a49-ae66-a734e64cf1ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15642"}}
	{"specversion":"1.0","id":"d409a338-a42f-40e7-8bf7-701a261bd84f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bacb1247-5332-4ec2-9f41-43712f998fe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15642-3824/kubeconfig"}}
	{"specversion":"1.0","id":"8407fc04-7d69-474b-940b-7a7396235450","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3824/.minikube"}}
	{"specversion":"1.0","id":"3814b8d6-a7a1-4b71-a465-882d1ede6479","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"aeae1b22-1c9c-40f5-8665-312edb21df5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
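Each stdout line above is a CloudEvents v1.0 JSON object emitted by `--output=json`. As a minimal illustration (plain Python stdlib, reusing one `info` event copied from the output above; this script is not part of minikube's tooling), the event kind and message can be extracted like this:

```python
import json

# One CloudEvents line from the `minikube start --output=json` stdout above.
line = ('{"specversion":"1.0","id":"9b10ecee-7403-4a49-ae66-a734e64cf1ad",'
        '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info",'
        '"datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15642"}}')

event = json.loads(line)
# The last dotted segment of "type" distinguishes step, info, and error events.
kind = event["type"].rsplit(".", 1)[-1]
message = event["data"]["message"]
print(kind, message)  # info MINIKUBE_LOCATION=15642
```

The same loop over all stdout lines would surface the final `error` event, whose `data.exitcode` field (`"56"` above) matches the process exit status the test asserts on.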
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.28s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (34.12s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.595207527s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (16.184565499s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (2.56460238s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.12s)

TestChangeNoneUser (33.04s)

=== RUN   TestChangeNoneUser
none_test.go:47: (dbg) Run:  /usr/bin/env CHANGE_MINIKUBE_NONE_USER=true out/minikube-linux-amd64 start --wait=false --driver=none --bootstrapper=kubeadm
none_test.go:47: (dbg) Done: /usr/bin/env CHANGE_MINIKUBE_NONE_USER=true out/minikube-linux-amd64 start --wait=false --driver=none --bootstrapper=kubeadm: (14.240457976s)
none_test.go:52: (dbg) Run:  out/minikube-linux-amd64 delete
none_test.go:52: (dbg) Done: out/minikube-linux-amd64 delete: (2.434790342s)
none_test.go:57: (dbg) Run:  /usr/bin/env CHANGE_MINIKUBE_NONE_USER=true out/minikube-linux-amd64 start --wait=false --driver=none --bootstrapper=kubeadm
none_test.go:57: (dbg) Done: /usr/bin/env CHANGE_MINIKUBE_NONE_USER=true out/minikube-linux-amd64 start --wait=false --driver=none --bootstrapper=kubeadm: (13.788615596s)
none_test.go:62: (dbg) Run:  out/minikube-linux-amd64 status
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (2.382704888s)
--- PASS: TestChangeNoneUser (33.04s)

TestPause/serial/Start (28.73s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (28.730196946s)
--- PASS: TestPause/serial/Start (28.73s)

TestPause/serial/SecondStartNoReconfiguration (40.57s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (40.56940409s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.57s)

TestPause/serial/Pause (0.55s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.55s)

TestPause/serial/VerifyStatus (0.18s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (175.085679ms)

                                                
                                                
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
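In the `--layout=cluster` status above, a paused component reports StatusCode 418 and a stopped one 405, which is why the command exits non-zero while the test still passes. A small sketch (Python stdlib, using an abbreviated copy of the JSON above; not minikube's own code) of reading that structure:

```python
import json

# Abbreviated copy of the `status --output=json --layout=cluster` output above.
# StatusCode 418 marks a paused component; 405 a stopped one; 200 is OK.
raw = ('{"Name":"minikube","StatusCode":418,"StatusName":"Paused",'
       '"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK",'
       '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
       '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}')

status = json.loads(raw)
components = status["Nodes"][0]["Components"]
paused = [name for name, c in components.items() if c["StatusCode"] == 418]
print(status["StatusName"], paused)  # Paused ['apiserver']
```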
--- PASS: TestPause/serial/VerifyStatus (0.18s)

TestPause/serial/Unpause (0.48s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.48s)

TestPause/serial/PauseAgain (0.58s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.58s)

TestPause/serial/DeletePaused (3.23s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (3.225347059s)
--- PASS: TestPause/serial/DeletePaused (3.23s)

TestPause/serial/VerifyDeletedResources (0.1s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.10s)

TestRunningBinaryUpgrade (100.22s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.6.2.3974287437.exe start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.6.2.3974287437.exe start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (36.690585416s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (59.375501423s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (2.137128069s)
--- PASS: TestRunningBinaryUpgrade (100.22s)

TestStoppedBinaryUpgrade/Setup (1.74s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.74s)

TestStoppedBinaryUpgrade/Upgrade (58.55s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.6.2.1459742876.exe start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.6.2.1459742876.exe start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (17.911263629s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.6.2.1459742876.exe -p minikube stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.6.2.1459742876.exe -p minikube stop: (5.72088128s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (34.915345236s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (58.55s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

TestKubernetesUpgrade (341.11s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (44.313197279s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.426162529s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (109.442088ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m17.986923327s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.16.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.16.0 --driver=none --bootstrapper=kubeadm: exit status 106 (92.830791ms)

                                                
                                                
-- stdout --
	* minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-3824/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3824/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start --kubernetes-version=v1.25.3
	    

                                                
                                                
** /stderr **
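The K8S_DOWNGRADE_UNSUPPORTED rejection above amounts to refusing any request whose target version is older than the running cluster's. A minimal sketch of that comparison (plain Python; this is the concept only, not minikube's actual implementation):

```python
# Hedged sketch: a start request is a "downgrade" when the requested
# Kubernetes version sorts below the cluster's current version.
def parse_version(v: str) -> tuple:
    # "v1.25.3" -> (1, 25, 3); tuples compare component-wise.
    return tuple(int(part) for part in v.lstrip("v").split("."))

def downgrade_requested(current: str, requested: str) -> bool:
    return parse_version(requested) < parse_version(current)

print(downgrade_requested("v1.25.3", "v1.16.0"))  # True: rejected, as above
print(downgrade_requested("v1.16.0", "v1.25.3"))  # False: upgrades are allowed
```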
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (25.590843097s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (2.536388469s)
--- PASS: TestKubernetesUpgrade (341.11s)

Test skip (56/144)

Order skipped test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
5 TestDownloadOnly/v1.16.0/cached-images 0
7 TestDownloadOnly/v1.16.0/kubectl 0
11 TestDownloadOnly/v1.25.3/preload-exists 0
12 TestDownloadOnly/v1.25.3/cached-images 0
14 TestDownloadOnly/v1.25.3/kubectl 0
18 TestDownloadOnlyKic 0
25 TestAddons/parallel/Ingress 0
28 TestAddons/parallel/Olm 0
36 TestCertOptions 0
38 TestDockerFlags 0
39 TestForceSystemdFlag 0
40 TestForceSystemdEnv 0
41 TestKVMDriverInstallOrUpdate 0
42 TestHyperKitDriverInstallOrUpdate 0
43 TestHyperkitDriverSkipUpgrade 0
44 TestErrorSpam 0
53 TestFunctional/serial/CacheCmd 0
66 TestFunctional/parallel/MountCmd 0
77 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
78 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
79 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
80 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
81 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
82 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
83 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
84 TestFunctional/parallel/SSHCmd 0
85 TestFunctional/parallel/CpCmd 0
87 TestFunctional/parallel/FileSync 0
88 TestFunctional/parallel/CertSync 0
93 TestFunctional/parallel/DockerEnv 0
94 TestFunctional/parallel/PodmanEnv 0
96 TestFunctional/parallel/ImageCommands 0
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0
105 TestGvisorAddon 0
106 TestIngressAddonLegacy 0
133 TestKicCustomNetwork 0
134 TestKicExistingNetwork 0
135 TestKicCustomSubnet 0
136 TestKicStaticIP 0
139 TestMountStart 0
140 TestMultiNode 0
141 TestNetworkPlugins 0
142 TestNoKubernetes 0
154 TestPreload 0
155 TestScheduledStopWindows 0
156 TestScheduledStopUnix 0
157 TestSkaffold 0
160 TestStartStop/group/old-k8s-version 0.18
161 TestStartStop/group/newest-cni 0.18
162 TestStartStop/group/default-k8s-diff-port 0.18
163 TestStartStop/group/no-preload 0.18
164 TestStartStop/group/disable-driver-mounts 0.19
165 TestStartStop/group/embed-certs 0.18
166 TestInsufficientStorage 0
173 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:100: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:118: None driver has no cache
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
aaa_download_only_test.go:100: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:118: None driver has no cache
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:158: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:455: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:32: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:75: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:138: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1034: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:50: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1647: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

                                                
                                    
TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1690: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

                                                
                                    
TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1851: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

                                                
                                    
TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1882: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:451: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:538: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:288: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1943: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestIngressAddonLegacy (0s)

=== RUN   TestIngressAddonLegacy
ingress_addon_legacy_test.go:30: skipping: none driver does not support ingress
--- SKIP: TestIngressAddonLegacy (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

                                                
                                    
TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:39: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

                                                
                                    
TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:45: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

                                                
                                    
TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

                                                
                                    
TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/old-k8s-version (0.18s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.18s)

                                                
                                    
TestStartStop/group/newest-cni (0.18s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.18s)

                                                
                                    
TestStartStop/group/no-preload (0.18s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.18s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
TestStartStop/group/embed-certs (0.18s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.18s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:291: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    